2026-02-13 02:24:18.456180 | Job console starting
2026-02-13 02:24:18.467356 | Updating git repos
2026-02-13 02:24:18.537959 | Cloning repos into workspace
2026-02-13 02:24:18.757190 | Restoring repo states
2026-02-13 02:24:18.778787 | Merging changes
2026-02-13 02:24:18.778954 | Checking out repos
2026-02-13 02:24:19.072797 | Preparing playbooks
2026-02-13 02:24:19.794277 | Running Ansible setup
2026-02-13 02:24:24.198781 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-13 02:24:24.949326 |
2026-02-13 02:24:24.949481 | PLAY [Base pre]
2026-02-13 02:24:24.966302 |
2026-02-13 02:24:24.966429 | TASK [Setup log path fact]
2026-02-13 02:24:24.996345 | orchestrator | ok
2026-02-13 02:24:25.013634 |
2026-02-13 02:24:25.013768 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-13 02:24:25.059650 | orchestrator | ok
2026-02-13 02:24:25.073331 |
2026-02-13 02:24:25.073467 | TASK [emit-job-header : Print job information]
2026-02-13 02:24:25.126366 | # Job Information
2026-02-13 02:24:25.126614 | Ansible Version: 2.16.14
2026-02-13 02:24:25.126663 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-13 02:24:25.126710 | Pipeline: periodic-midnight
2026-02-13 02:24:25.126743 | Executor: 521e9411259a
2026-02-13 02:24:25.126772 | Triggered by: https://github.com/osism/testbed
2026-02-13 02:24:25.126804 | Event ID: 4e2cd8ed99964abc9d6a890e7fc2d18d
2026-02-13 02:24:25.136838 |
2026-02-13 02:24:25.137000 | LOOP [emit-job-header : Print node information]
2026-02-13 02:24:25.269852 | orchestrator | ok:
2026-02-13 02:24:25.270262 | orchestrator | # Node Information
2026-02-13 02:24:25.270331 | orchestrator | Inventory Hostname: orchestrator
2026-02-13 02:24:25.270376 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-13 02:24:25.270415 | orchestrator | Username: zuul-testbed03
2026-02-13 02:24:25.270452 | orchestrator | Distro: Debian 12.13
2026-02-13 02:24:25.270493 | orchestrator | Provider: static-testbed
2026-02-13 02:24:25.270531 | orchestrator | Region:
2026-02-13 02:24:25.270568 | orchestrator | Label: testbed-orchestrator
2026-02-13 02:24:25.270604 | orchestrator | Product Name: OpenStack Nova
2026-02-13 02:24:25.270638 | orchestrator | Interface IP: 81.163.193.140
2026-02-13 02:24:25.300367 |
2026-02-13 02:24:25.300537 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-13 02:24:25.761456 | orchestrator -> localhost | changed
2026-02-13 02:24:25.770137 |
2026-02-13 02:24:25.770258 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-13 02:24:26.868407 | orchestrator -> localhost | changed
2026-02-13 02:24:26.883533 |
2026-02-13 02:24:26.883659 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-13 02:24:27.153535 | orchestrator -> localhost | ok
2026-02-13 02:24:27.170102 |
2026-02-13 02:24:27.170288 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-13 02:24:27.221434 | orchestrator | ok
2026-02-13 02:24:27.243156 | orchestrator | included: /var/lib/zuul/builds/eaf6616a4e9e46b08359ec9d54172af9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-13 02:24:27.251224 |
2026-02-13 02:24:27.251321 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-13 02:24:29.374353 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-13 02:24:29.374583 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/eaf6616a4e9e46b08359ec9d54172af9/work/eaf6616a4e9e46b08359ec9d54172af9_id_rsa
2026-02-13 02:24:29.374622 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/eaf6616a4e9e46b08359ec9d54172af9/work/eaf6616a4e9e46b08359ec9d54172af9_id_rsa.pub
2026-02-13 02:24:29.374649 | orchestrator -> localhost | The key fingerprint is:
2026-02-13 02:24:29.374674 | orchestrator -> localhost | SHA256:FhhY89LjNhzQbVpPCXxDbsq7tAQA0c9UU1cHh16i1i0 zuul-build-sshkey
2026-02-13 02:24:29.374696 | orchestrator -> localhost | The key's randomart image is:
2026-02-13 02:24:29.374734 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-13 02:24:29.374756 | orchestrator -> localhost | | o==. +++o.o++|
2026-02-13 02:24:29.374778 | orchestrator -> localhost | | ...Bo =o=.o.o|
2026-02-13 02:24:29.374798 | orchestrator -> localhost | | ++*+ +o= + |
2026-02-13 02:24:29.374818 | orchestrator -> localhost | | =+= o+ E .|
2026-02-13 02:24:29.374859 | orchestrator -> localhost | | S o. . |
2026-02-13 02:24:29.374882 | orchestrator -> localhost | | o o . |
2026-02-13 02:24:29.374903 | orchestrator -> localhost | | + |
2026-02-13 02:24:29.374922 | orchestrator -> localhost | | o o |
2026-02-13 02:24:29.374942 | orchestrator -> localhost | | o |
2026-02-13 02:24:29.374962 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-13 02:24:29.375025 | orchestrator -> localhost | ok: Runtime: 0:00:01.604818
2026-02-13 02:24:29.383689 |
2026-02-13 02:24:29.383810 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-13 02:24:29.415792 | orchestrator | ok
2026-02-13 02:24:29.426554 | orchestrator | included: /var/lib/zuul/builds/eaf6616a4e9e46b08359ec9d54172af9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-13 02:24:29.440932 |
2026-02-13 02:24:29.441111 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-13 02:24:29.465149 | orchestrator | skipping: Conditional result was False
2026-02-13 02:24:29.473224 |
2026-02-13 02:24:29.473341 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-13 02:24:30.131680 | orchestrator | changed
2026-02-13 02:24:30.140274 |
2026-02-13 02:24:30.140402 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-13 02:24:30.482877 | orchestrator | ok
2026-02-13 02:24:30.492458 |
2026-02-13 02:24:30.492595 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-13 02:24:30.948624 | orchestrator | ok
2026-02-13 02:24:30.957706 |
2026-02-13 02:24:30.957840 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-13 02:24:31.425407 | orchestrator | ok
2026-02-13 02:24:31.431803 |
2026-02-13 02:24:31.431910 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-13 02:24:31.456400 | orchestrator | skipping: Conditional result was False
2026-02-13 02:24:31.464962 |
2026-02-13 02:24:31.465094 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-13 02:24:31.935646 | orchestrator -> localhost | changed
2026-02-13 02:24:31.954816 |
2026-02-13 02:24:31.954996 | TASK [add-build-sshkey : Add back temp key]
2026-02-13 02:24:32.282289 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/eaf6616a4e9e46b08359ec9d54172af9/work/eaf6616a4e9e46b08359ec9d54172af9_id_rsa (zuul-build-sshkey)
2026-02-13 02:24:32.282531 | orchestrator -> localhost | ok: Runtime: 0:00:00.020400
2026-02-13 02:24:32.290042 |
2026-02-13 02:24:32.290153 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-13 02:24:32.731238 | orchestrator | ok
2026-02-13 02:24:32.738398 |
2026-02-13 02:24:32.738532 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-13 02:24:32.762528 | orchestrator | skipping: Conditional result was False
2026-02-13 02:24:32.812568 |
2026-02-13 02:24:32.812703 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-13 02:24:33.278123 | orchestrator | ok
2026-02-13 02:24:33.299220 |
2026-02-13 02:24:33.299425 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-13 02:24:33.331600 | orchestrator | ok
2026-02-13 02:24:33.340535 |
2026-02-13 02:24:33.340652 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-13 02:24:33.648363 | orchestrator -> localhost | ok
2026-02-13 02:24:33.664767 |
2026-02-13 02:24:33.664952 | TASK [validate-host : Collect information about the host]
2026-02-13 02:24:34.956590 | orchestrator | ok
2026-02-13 02:24:34.970302 |
2026-02-13 02:24:34.970420 | TASK [validate-host : Sanitize hostname]
2026-02-13 02:24:35.021319 | orchestrator | ok
2026-02-13 02:24:35.027477 |
2026-02-13 02:24:35.027617 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-13 02:24:35.582070 | orchestrator -> localhost | changed
2026-02-13 02:24:35.595183 |
2026-02-13 02:24:35.595351 | TASK [validate-host : Collect information about zuul worker]
2026-02-13 02:24:36.063668 | orchestrator | ok
2026-02-13 02:24:36.072849 |
2026-02-13 02:24:36.073008 | TASK [validate-host : Write out all zuul information for each host]
2026-02-13 02:24:36.626916 | orchestrator -> localhost | changed
2026-02-13 02:24:36.647628 |
2026-02-13 02:24:36.647763 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-13 02:24:36.935922 | orchestrator | ok
2026-02-13 02:24:36.944914 |
2026-02-13 02:24:36.945095 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-13 02:24:58.230377 | orchestrator | changed:
2026-02-13 02:24:58.230660 | orchestrator | .d..t...... src/
2026-02-13 02:24:58.230711 | orchestrator | .d..t...... src/github.com/
2026-02-13 02:24:58.230747 | orchestrator | .d..t...... src/github.com/osism/
2026-02-13 02:24:58.230780 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-13 02:24:58.230812 | orchestrator | RedHat.yml
2026-02-13 02:24:58.246691 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-13 02:24:58.246708 | orchestrator | RedHat.yml
2026-02-13 02:24:58.246759 | orchestrator | = 1.53.0"...
2026-02-13 02:25:12.417669 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-02-13 02:25:12.569471 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-13 02:25:13.023490 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-13 02:25:13.407163 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-13 02:25:14.364234 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-13 02:25:14.429200 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-13 02:25:14.897910 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-13 02:25:14.898011 | orchestrator |
2026-02-13 02:25:14.898052 | orchestrator | Providers are signed by their developers.
2026-02-13 02:25:14.898057 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-13 02:25:14.898070 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-13 02:25:14.898106 | orchestrator |
2026-02-13 02:25:14.898111 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-13 02:25:14.898116 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-13 02:25:14.898125 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-13 02:25:14.898135 | orchestrator | you run "tofu init" in the future.
2026-02-13 02:25:14.898558 | orchestrator |
2026-02-13 02:25:14.898612 | orchestrator | OpenTofu has been successfully initialized!
2026-02-13 02:25:14.898636 | orchestrator |
2026-02-13 02:25:14.898642 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-13 02:25:14.898647 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-13 02:25:14.898651 | orchestrator | should now work.
2026-02-13 02:25:14.898655 | orchestrator |
2026-02-13 02:25:14.898659 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-13 02:25:14.898663 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-13 02:25:14.898674 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-13 02:25:15.071079 | orchestrator | Created and switched to workspace "ci"!
2026-02-13 02:25:15.071126 | orchestrator |
2026-02-13 02:25:15.071503 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-13 02:25:15.071512 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-13 02:25:15.071538 | orchestrator | for this configuration.
2026-02-13 02:25:15.217663 | orchestrator | ci.auto.tfvars
2026-02-13 02:25:15.994903 | orchestrator | default_custom.tf
2026-02-13 02:25:18.395838 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-13 02:25:18.978861 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-13 02:25:19.211588 | orchestrator |
2026-02-13 02:25:19.211657 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-13 02:25:19.211665 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-13 02:25:19.211670 | orchestrator | + create
2026-02-13 02:25:19.211675 | orchestrator | <= read (data resources)
2026-02-13 02:25:19.211686 | orchestrator |
2026-02-13 02:25:19.211690 | orchestrator | OpenTofu will perform the following actions:
2026-02-13 02:25:19.211733 | orchestrator |
2026-02-13 02:25:19.211740 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-13 02:25:19.211744 | orchestrator | # (config refers to values not yet known)
2026-02-13 02:25:19.211748 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-13 02:25:19.211752 | orchestrator | + checksum = (known after apply)
2026-02-13 02:25:19.211757 | orchestrator | + created_at = (known after apply)
2026-02-13 02:25:19.211761 | orchestrator | + file = (known after apply)
2026-02-13 02:25:19.211765 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.211789 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.211793 | orchestrator | + min_disk_gb = (known after apply)
2026-02-13 02:25:19.211798 | orchestrator | + min_ram_mb = (known after apply)
2026-02-13 02:25:19.211801 | orchestrator | + most_recent = true
2026-02-13 02:25:19.211806 | orchestrator | + name = (known after apply)
2026-02-13 02:25:19.211810 | orchestrator | + protected = (known after apply)
2026-02-13 02:25:19.211814 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.211820 | orchestrator | + schema = (known after apply)
2026-02-13 02:25:19.211824 | orchestrator | + size_bytes = (known after apply)
2026-02-13 02:25:19.211828 | orchestrator | + tags = (known after apply)
2026-02-13 02:25:19.211832 | orchestrator | + updated_at = (known after apply)
2026-02-13 02:25:19.211836 | orchestrator | }
2026-02-13 02:25:19.211993 | orchestrator |
2026-02-13 02:25:19.212005 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-13 02:25:19.212010 | orchestrator | # (config refers to values not yet known)
2026-02-13 02:25:19.212015 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-13 02:25:19.212022 | orchestrator | + checksum = (known after apply)
2026-02-13 02:25:19.212028 | orchestrator | + created_at = (known after apply)
2026-02-13 02:25:19.212034 | orchestrator | + file = (known after apply)
2026-02-13 02:25:19.212040 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.212045 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.212051 | orchestrator | + min_disk_gb = (known after apply)
2026-02-13 02:25:19.212056 | orchestrator | + min_ram_mb = (known after apply)
2026-02-13 02:25:19.212062 | orchestrator | + most_recent = true
2026-02-13 02:25:19.212068 | orchestrator | + name = (known after apply)
2026-02-13 02:25:19.212073 | orchestrator | + protected = (known after apply)
2026-02-13 02:25:19.212079 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.212085 | orchestrator | + schema = (known after apply)
2026-02-13 02:25:19.212091 | orchestrator | + size_bytes = (known after apply)
2026-02-13 02:25:19.212097 | orchestrator | + tags = (known after apply)
2026-02-13 02:25:19.212103 | orchestrator | + updated_at = (known after apply)
2026-02-13 02:25:19.212109 | orchestrator | }
2026-02-13 02:25:19.212119 | orchestrator |
2026-02-13 02:25:19.212125 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-13 02:25:19.212131 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-13 02:25:19.212137 | orchestrator | + content = (known after apply)
2026-02-13 02:25:19.212145 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-13 02:25:19.212151 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-13 02:25:19.212158 | orchestrator | + content_md5 = (known after apply)
2026-02-13 02:25:19.212165 | orchestrator | + content_sha1 = (known after apply)
2026-02-13 02:25:19.212169 | orchestrator | + content_sha256 = (known after apply)
2026-02-13 02:25:19.212173 | orchestrator | + content_sha512 = (known after apply)
2026-02-13 02:25:19.212177 | orchestrator | + directory_permission = "0777"
2026-02-13 02:25:19.212181 | orchestrator | + file_permission = "0644"
2026-02-13 02:25:19.212186 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-13 02:25:19.212190 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.212193 | orchestrator | }
2026-02-13 02:25:19.212261 | orchestrator |
2026-02-13 02:25:19.212267 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-13 02:25:19.212271 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-13 02:25:19.212275 | orchestrator | + content = (known after apply)
2026-02-13 02:25:19.212279 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-13 02:25:19.212283 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-13 02:25:19.212287 | orchestrator | + content_md5 = (known after apply)
2026-02-13 02:25:19.212290 | orchestrator | + content_sha1 = (known after apply)
2026-02-13 02:25:19.212294 | orchestrator | + content_sha256 = (known after apply)
2026-02-13 02:25:19.212298 | orchestrator | + content_sha512 = (known after apply)
2026-02-13 02:25:19.212302 | orchestrator | + directory_permission = "0777"
2026-02-13 02:25:19.212306 | orchestrator | + file_permission = "0644"
2026-02-13 02:25:19.212318 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-13 02:25:19.212322 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.212372 | orchestrator | }
2026-02-13 02:25:19.212394 | orchestrator |
2026-02-13 02:25:19.212408 | orchestrator | # local_file.inventory will be created
2026-02-13 02:25:19.212413 | orchestrator | + resource "local_file" "inventory" {
2026-02-13 02:25:19.212417 | orchestrator | + content = (known after apply)
2026-02-13 02:25:19.212420 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-13 02:25:19.212424 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-13 02:25:19.212428 | orchestrator | + content_md5 = (known after apply)
2026-02-13 02:25:19.212432 | orchestrator | + content_sha1 = (known after apply)
2026-02-13 02:25:19.212436 | orchestrator | + content_sha256 = (known after apply)
2026-02-13 02:25:19.212440 | orchestrator | + content_sha512 = (known after apply)
2026-02-13 02:25:19.212444 | orchestrator | + directory_permission = "0777"
2026-02-13 02:25:19.212448 | orchestrator | + file_permission = "0644"
2026-02-13 02:25:19.212452 | orchestrator | + filename = "inventory.ci"
2026-02-13 02:25:19.212455 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.212459 | orchestrator | }
2026-02-13 02:25:19.212543 | orchestrator |
2026-02-13 02:25:19.212549 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-13 02:25:19.212553 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-13 02:25:19.212557 | orchestrator | + content = (sensitive value)
2026-02-13 02:25:19.212561 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-13 02:25:19.212565 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-13 02:25:19.212569 | orchestrator | + content_md5 = (known after apply)
2026-02-13 02:25:19.212573 | orchestrator | + content_sha1 = (known after apply)
2026-02-13 02:25:19.212577 | orchestrator | + content_sha256 = (known after apply)
2026-02-13 02:25:19.212580 | orchestrator | + content_sha512 = (known after apply)
2026-02-13 02:25:19.212584 | orchestrator | + directory_permission = "0700"
2026-02-13 02:25:19.212588 | orchestrator | + file_permission = "0600"
2026-02-13 02:25:19.212592 | orchestrator | + filename = ".id_rsa.ci"
2026-02-13 02:25:19.212595 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.212599 | orchestrator | }
2026-02-13 02:25:19.212605 | orchestrator |
2026-02-13 02:25:19.212609 | orchestrator | # null_resource.node_semaphore will be created
2026-02-13 02:25:19.212612 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-13 02:25:19.212616 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.212620 | orchestrator | }
2026-02-13 02:25:19.212684 | orchestrator |
2026-02-13 02:25:19.212689 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-13 02:25:19.212694 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-13 02:25:19.212697 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.212701 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.212705 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.212709 | orchestrator | + image_id = (known after apply)
2026-02-13 02:25:19.212713 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.212717 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-13 02:25:19.212720 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.212724 | orchestrator | + size = 80
2026-02-13 02:25:19.212728 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.212732 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.212735 | orchestrator | }
2026-02-13 02:25:19.212816 | orchestrator |
2026-02-13 02:25:19.212821 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-13 02:25:19.212825 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-13 02:25:19.212828 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.212832 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.212836 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.212845 | orchestrator | + image_id = (known after apply)
2026-02-13 02:25:19.212849 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.212853 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-13 02:25:19.212857 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.212860 | orchestrator | + size = 80
2026-02-13 02:25:19.212864 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.212868 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.212872 | orchestrator | }
2026-02-13 02:25:19.212892 | orchestrator |
2026-02-13 02:25:19.212897 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-13 02:25:19.212901 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-13 02:25:19.212905 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.212908 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.212912 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.212916 | orchestrator | + image_id = (known after apply)
2026-02-13 02:25:19.212920 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.212924 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-13 02:25:19.212928 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.212931 | orchestrator | + size = 80
2026-02-13 02:25:19.212935 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.212939 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.212943 | orchestrator | }
2026-02-13 02:25:19.213018 | orchestrator |
2026-02-13 02:25:19.213025 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-13 02:25:19.213109 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-13 02:25:19.213115 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213119 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213123 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213127 | orchestrator | + image_id = (known after apply)
2026-02-13 02:25:19.213131 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213135 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-13 02:25:19.213138 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213142 | orchestrator | + size = 80
2026-02-13 02:25:19.213146 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213150 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213153 | orchestrator | }
2026-02-13 02:25:19.213159 | orchestrator |
2026-02-13 02:25:19.213163 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-13 02:25:19.213167 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-13 02:25:19.213171 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213175 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213179 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213182 | orchestrator | + image_id = (known after apply)
2026-02-13 02:25:19.213186 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213194 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-13 02:25:19.213198 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213202 | orchestrator | + size = 80
2026-02-13 02:25:19.213206 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213210 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213214 | orchestrator | }
2026-02-13 02:25:19.213219 | orchestrator |
2026-02-13 02:25:19.213223 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-13 02:25:19.213227 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-13 02:25:19.213230 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213234 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213238 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213247 | orchestrator | + image_id = (known after apply)
2026-02-13 02:25:19.213251 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213255 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-13 02:25:19.213258 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213262 | orchestrator | + size = 80
2026-02-13 02:25:19.213266 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213270 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213274 | orchestrator | }
2026-02-13 02:25:19.213279 | orchestrator |
2026-02-13 02:25:19.213283 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-13 02:25:19.213286 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-13 02:25:19.213290 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213294 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213298 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213302 | orchestrator | + image_id = (known after apply)
2026-02-13 02:25:19.213305 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213309 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-13 02:25:19.213313 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213317 | orchestrator | + size = 80
2026-02-13 02:25:19.213320 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213324 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213328 | orchestrator | }
2026-02-13 02:25:19.213333 | orchestrator |
2026-02-13 02:25:19.213337 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-13 02:25:19.213341 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-13 02:25:19.213345 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213349 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213353 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213357 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213361 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-13 02:25:19.213365 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213369 | orchestrator | + size = 20
2026-02-13 02:25:19.213373 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213376 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213380 | orchestrator | }
2026-02-13 02:25:19.213385 | orchestrator |
2026-02-13 02:25:19.213389 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-13 02:25:19.213393 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-13 02:25:19.213397 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213401 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213405 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213408 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213412 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-13 02:25:19.213416 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213420 | orchestrator | + size = 20
2026-02-13 02:25:19.213423 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213427 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213431 | orchestrator | }
2026-02-13 02:25:19.213436 | orchestrator |
2026-02-13 02:25:19.213440 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-13 02:25:19.213444 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-13 02:25:19.213448 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213451 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213455 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213459 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213463 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-13 02:25:19.213466 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213473 | orchestrator | + size = 20
2026-02-13 02:25:19.213477 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213481 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213485 | orchestrator | }
2026-02-13 02:25:19.213521 | orchestrator |
2026-02-13 02:25:19.213526 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-13 02:25:19.213530 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-13 02:25:19.213534 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213537 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213541 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213545 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213549 | orchestrator | + name = "testbed-volume-3-node-3"
2026-02-13 02:25:19.213552 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213556 | orchestrator | + size = 20
2026-02-13 02:25:19.213560 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213564 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213568 | orchestrator | }
2026-02-13 02:25:19.213612 | orchestrator |
2026-02-13 02:25:19.213616 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-02-13 02:25:19.213620 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-13 02:25:19.213624 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213628 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213632 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213635 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213639 | orchestrator | + name = "testbed-volume-4-node-4"
2026-02-13 02:25:19.213643 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213649 | orchestrator | + size = 20
2026-02-13 02:25:19.213653 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213657 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213661 | orchestrator | }
2026-02-13 02:25:19.213667 | orchestrator |
2026-02-13 02:25:19.213672 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-02-13 02:25:19.213676 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-13 02:25:19.213680 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213684 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213688 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213691 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213695 | orchestrator | + name = "testbed-volume-5-node-5"
2026-02-13 02:25:19.213699 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213703 | orchestrator | + size = 20
2026-02-13 02:25:19.213707 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213710 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213714 | orchestrator | }
2026-02-13 02:25:19.213737 | orchestrator |
2026-02-13 02:25:19.213741 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-02-13 02:25:19.213745 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-13 02:25:19.213749 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213753 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213756 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213760 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213764 | orchestrator | + name = "testbed-volume-6-node-3"
2026-02-13 02:25:19.213768 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213771 | orchestrator | + size = 20
2026-02-13 02:25:19.213775 | orchestrator | + volume_retype_policy = "never"
2026-02-13 02:25:19.213779 | orchestrator | + volume_type = "ssd"
2026-02-13 02:25:19.213783 | orchestrator | }
2026-02-13 02:25:19.213805 | orchestrator |
2026-02-13 02:25:19.213810 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-02-13 02:25:19.213814 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-13 02:25:19.213821 | orchestrator | + attachment = (known after apply)
2026-02-13 02:25:19.213824 | orchestrator | + availability_zone = "nova"
2026-02-13 02:25:19.213828 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.213832 | orchestrator | + metadata = (known after apply)
2026-02-13 02:25:19.213836 | orchestrator | + name = "testbed-volume-7-node-4"
2026-02-13 02:25:19.213839 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.213843 | orchestrator | + size = 20 2026-02-13 02:25:19.213847 | orchestrator | + volume_retype_policy = "never" 2026-02-13 02:25:19.213851 | orchestrator | + volume_type = "ssd" 2026-02-13 02:25:19.213855 | orchestrator | } 2026-02-13 02:25:19.213876 | orchestrator | 2026-02-13 02:25:19.213880 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-13 02:25:19.213884 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-13 02:25:19.213888 | orchestrator | + attachment = (known after apply) 2026-02-13 02:25:19.213892 | orchestrator | + availability_zone = "nova" 2026-02-13 02:25:19.213896 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.213899 | orchestrator | + metadata = (known after apply) 2026-02-13 02:25:19.213903 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-13 02:25:19.213907 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.213911 | orchestrator | + size = 20 2026-02-13 02:25:19.213914 | orchestrator | + volume_retype_policy = "never" 2026-02-13 02:25:19.213918 | orchestrator | + volume_type = "ssd" 2026-02-13 02:25:19.213922 | orchestrator | } 2026-02-13 02:25:19.214256 | orchestrator | 2026-02-13 02:25:19.214273 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-13 02:25:19.214277 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-13 02:25:19.214281 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-13 02:25:19.214285 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-13 02:25:19.214289 | orchestrator | + all_metadata = (known after apply) 2026-02-13 02:25:19.214293 | orchestrator | + all_tags = (known after apply) 2026-02-13 02:25:19.214298 | orchestrator | + availability_zone = "nova" 2026-02-13 02:25:19.214301 | orchestrator | + config_drive = true 2026-02-13 02:25:19.214305 | orchestrator | + created = (known after apply) 
2026-02-13 02:25:19.214309 | orchestrator | + flavor_id = (known after apply) 2026-02-13 02:25:19.214313 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-13 02:25:19.214317 | orchestrator | + force_delete = false 2026-02-13 02:25:19.214321 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-13 02:25:19.214324 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.214328 | orchestrator | + image_id = (known after apply) 2026-02-13 02:25:19.214332 | orchestrator | + image_name = (known after apply) 2026-02-13 02:25:19.214336 | orchestrator | + key_pair = "testbed" 2026-02-13 02:25:19.214339 | orchestrator | + name = "testbed-manager" 2026-02-13 02:25:19.214343 | orchestrator | + power_state = "active" 2026-02-13 02:25:19.214347 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.214351 | orchestrator | + security_groups = (known after apply) 2026-02-13 02:25:19.214354 | orchestrator | + stop_before_destroy = false 2026-02-13 02:25:19.214358 | orchestrator | + updated = (known after apply) 2026-02-13 02:25:19.214362 | orchestrator | + user_data = (sensitive value) 2026-02-13 02:25:19.214366 | orchestrator | 2026-02-13 02:25:19.214370 | orchestrator | + block_device { 2026-02-13 02:25:19.214374 | orchestrator | + boot_index = 0 2026-02-13 02:25:19.214378 | orchestrator | + delete_on_termination = false 2026-02-13 02:25:19.214387 | orchestrator | + destination_type = "volume" 2026-02-13 02:25:19.214391 | orchestrator | + multiattach = false 2026-02-13 02:25:19.214395 | orchestrator | + source_type = "volume" 2026-02-13 02:25:19.214398 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.214408 | orchestrator | } 2026-02-13 02:25:19.214412 | orchestrator | 2026-02-13 02:25:19.214416 | orchestrator | + network { 2026-02-13 02:25:19.214420 | orchestrator | + access_network = false 2026-02-13 02:25:19.214423 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-13 02:25:19.214427 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-02-13 02:25:19.214431 | orchestrator | + mac = (known after apply) 2026-02-13 02:25:19.214435 | orchestrator | + name = (known after apply) 2026-02-13 02:25:19.214439 | orchestrator | + port = (known after apply) 2026-02-13 02:25:19.214443 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.214446 | orchestrator | } 2026-02-13 02:25:19.214450 | orchestrator | } 2026-02-13 02:25:19.214457 | orchestrator | 2026-02-13 02:25:19.214461 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-13 02:25:19.214465 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-13 02:25:19.214469 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-13 02:25:19.214472 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-13 02:25:19.214476 | orchestrator | + all_metadata = (known after apply) 2026-02-13 02:25:19.214480 | orchestrator | + all_tags = (known after apply) 2026-02-13 02:25:19.214484 | orchestrator | + availability_zone = "nova" 2026-02-13 02:25:19.214487 | orchestrator | + config_drive = true 2026-02-13 02:25:19.214491 | orchestrator | + created = (known after apply) 2026-02-13 02:25:19.214495 | orchestrator | + flavor_id = (known after apply) 2026-02-13 02:25:19.214499 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-13 02:25:19.214502 | orchestrator | + force_delete = false 2026-02-13 02:25:19.214506 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-13 02:25:19.214510 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.214514 | orchestrator | + image_id = (known after apply) 2026-02-13 02:25:19.214517 | orchestrator | + image_name = (known after apply) 2026-02-13 02:25:19.214521 | orchestrator | + key_pair = "testbed" 2026-02-13 02:25:19.214525 | orchestrator | + name = "testbed-node-0" 2026-02-13 02:25:19.214529 | orchestrator | + power_state = "active" 2026-02-13 02:25:19.214532 | orchestrator | + region 
= (known after apply) 2026-02-13 02:25:19.214536 | orchestrator | + security_groups = (known after apply) 2026-02-13 02:25:19.214540 | orchestrator | + stop_before_destroy = false 2026-02-13 02:25:19.214544 | orchestrator | + updated = (known after apply) 2026-02-13 02:25:19.214547 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-13 02:25:19.214552 | orchestrator | 2026-02-13 02:25:19.214555 | orchestrator | + block_device { 2026-02-13 02:25:19.214559 | orchestrator | + boot_index = 0 2026-02-13 02:25:19.214563 | orchestrator | + delete_on_termination = false 2026-02-13 02:25:19.214567 | orchestrator | + destination_type = "volume" 2026-02-13 02:25:19.214570 | orchestrator | + multiattach = false 2026-02-13 02:25:19.214574 | orchestrator | + source_type = "volume" 2026-02-13 02:25:19.214578 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.214582 | orchestrator | } 2026-02-13 02:25:19.214585 | orchestrator | 2026-02-13 02:25:19.214589 | orchestrator | + network { 2026-02-13 02:25:19.214593 | orchestrator | + access_network = false 2026-02-13 02:25:19.214597 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-13 02:25:19.214601 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-13 02:25:19.214604 | orchestrator | + mac = (known after apply) 2026-02-13 02:25:19.214608 | orchestrator | + name = (known after apply) 2026-02-13 02:25:19.214612 | orchestrator | + port = (known after apply) 2026-02-13 02:25:19.214616 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.214619 | orchestrator | } 2026-02-13 02:25:19.214623 | orchestrator | } 2026-02-13 02:25:19.214629 | orchestrator | 2026-02-13 02:25:19.214633 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-13 02:25:19.214637 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-13 02:25:19.214641 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-13 
02:25:19.214649 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-13 02:25:19.214653 | orchestrator | + all_metadata = (known after apply) 2026-02-13 02:25:19.214657 | orchestrator | + all_tags = (known after apply) 2026-02-13 02:25:19.214661 | orchestrator | + availability_zone = "nova" 2026-02-13 02:25:19.214664 | orchestrator | + config_drive = true 2026-02-13 02:25:19.214668 | orchestrator | + created = (known after apply) 2026-02-13 02:25:19.214672 | orchestrator | + flavor_id = (known after apply) 2026-02-13 02:25:19.214676 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-13 02:25:19.214679 | orchestrator | + force_delete = false 2026-02-13 02:25:19.214683 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-13 02:25:19.214687 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.214690 | orchestrator | + image_id = (known after apply) 2026-02-13 02:25:19.214694 | orchestrator | + image_name = (known after apply) 2026-02-13 02:25:19.214698 | orchestrator | + key_pair = "testbed" 2026-02-13 02:25:19.214702 | orchestrator | + name = "testbed-node-1" 2026-02-13 02:25:19.214705 | orchestrator | + power_state = "active" 2026-02-13 02:25:19.214709 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.214713 | orchestrator | + security_groups = (known after apply) 2026-02-13 02:25:19.214717 | orchestrator | + stop_before_destroy = false 2026-02-13 02:25:19.214720 | orchestrator | + updated = (known after apply) 2026-02-13 02:25:19.214724 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-13 02:25:19.214728 | orchestrator | 2026-02-13 02:25:19.214732 | orchestrator | + block_device { 2026-02-13 02:25:19.214736 | orchestrator | + boot_index = 0 2026-02-13 02:25:19.214739 | orchestrator | + delete_on_termination = false 2026-02-13 02:25:19.214743 | orchestrator | + destination_type = "volume" 2026-02-13 02:25:19.214747 | orchestrator | + multiattach = false 2026-02-13 
02:25:19.214750 | orchestrator | + source_type = "volume" 2026-02-13 02:25:19.214754 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.214758 | orchestrator | } 2026-02-13 02:25:19.214762 | orchestrator | 2026-02-13 02:25:19.214766 | orchestrator | + network { 2026-02-13 02:25:19.214769 | orchestrator | + access_network = false 2026-02-13 02:25:19.214773 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-13 02:25:19.214777 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-13 02:25:19.214781 | orchestrator | + mac = (known after apply) 2026-02-13 02:25:19.214785 | orchestrator | + name = (known after apply) 2026-02-13 02:25:19.214788 | orchestrator | + port = (known after apply) 2026-02-13 02:25:19.214792 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.214796 | orchestrator | } 2026-02-13 02:25:19.214800 | orchestrator | } 2026-02-13 02:25:19.214856 | orchestrator | 2026-02-13 02:25:19.214862 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-13 02:25:19.214866 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-13 02:25:19.214870 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-13 02:25:19.214873 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-13 02:25:19.214878 | orchestrator | + all_metadata = (known after apply) 2026-02-13 02:25:19.214882 | orchestrator | + all_tags = (known after apply) 2026-02-13 02:25:19.214889 | orchestrator | + availability_zone = "nova" 2026-02-13 02:25:19.214893 | orchestrator | + config_drive = true 2026-02-13 02:25:19.214897 | orchestrator | + created = (known after apply) 2026-02-13 02:25:19.214900 | orchestrator | + flavor_id = (known after apply) 2026-02-13 02:25:19.214904 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-13 02:25:19.214908 | orchestrator | + force_delete = false 2026-02-13 02:25:19.214911 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-13 
02:25:19.214915 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.214919 | orchestrator | + image_id = (known after apply) 2026-02-13 02:25:19.214927 | orchestrator | + image_name = (known after apply) 2026-02-13 02:25:19.214931 | orchestrator | + key_pair = "testbed" 2026-02-13 02:25:19.214934 | orchestrator | + name = "testbed-node-2" 2026-02-13 02:25:19.214938 | orchestrator | + power_state = "active" 2026-02-13 02:25:19.214942 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.214945 | orchestrator | + security_groups = (known after apply) 2026-02-13 02:25:19.214949 | orchestrator | + stop_before_destroy = false 2026-02-13 02:25:19.214953 | orchestrator | + updated = (known after apply) 2026-02-13 02:25:19.214957 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-13 02:25:19.214998 | orchestrator | 2026-02-13 02:25:19.215002 | orchestrator | + block_device { 2026-02-13 02:25:19.215006 | orchestrator | + boot_index = 0 2026-02-13 02:25:19.215009 | orchestrator | + delete_on_termination = false 2026-02-13 02:25:19.215013 | orchestrator | + destination_type = "volume" 2026-02-13 02:25:19.215017 | orchestrator | + multiattach = false 2026-02-13 02:25:19.215021 | orchestrator | + source_type = "volume" 2026-02-13 02:25:19.215024 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.215028 | orchestrator | } 2026-02-13 02:25:19.215032 | orchestrator | 2026-02-13 02:25:19.215036 | orchestrator | + network { 2026-02-13 02:25:19.215040 | orchestrator | + access_network = false 2026-02-13 02:25:19.215043 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-13 02:25:19.215047 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-13 02:25:19.215051 | orchestrator | + mac = (known after apply) 2026-02-13 02:25:19.215055 | orchestrator | + name = (known after apply) 2026-02-13 02:25:19.215059 | orchestrator | + port = (known after apply) 2026-02-13 02:25:19.215064 | orchestrator | + uuid 
= (known after apply) 2026-02-13 02:25:19.215071 | orchestrator | } 2026-02-13 02:25:19.215079 | orchestrator | } 2026-02-13 02:25:19.215091 | orchestrator | 2026-02-13 02:25:19.215097 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-13 02:25:19.215103 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-13 02:25:19.215109 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-13 02:25:19.215115 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-13 02:25:19.215121 | orchestrator | + all_metadata = (known after apply) 2026-02-13 02:25:19.215128 | orchestrator | + all_tags = (known after apply) 2026-02-13 02:25:19.215134 | orchestrator | + availability_zone = "nova" 2026-02-13 02:25:19.215140 | orchestrator | + config_drive = true 2026-02-13 02:25:19.215146 | orchestrator | + created = (known after apply) 2026-02-13 02:25:19.215152 | orchestrator | + flavor_id = (known after apply) 2026-02-13 02:25:19.215159 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-13 02:25:19.215165 | orchestrator | + force_delete = false 2026-02-13 02:25:19.215172 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-13 02:25:19.215178 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.215186 | orchestrator | + image_id = (known after apply) 2026-02-13 02:25:19.215192 | orchestrator | + image_name = (known after apply) 2026-02-13 02:25:19.215199 | orchestrator | + key_pair = "testbed" 2026-02-13 02:25:19.215204 | orchestrator | + name = "testbed-node-3" 2026-02-13 02:25:19.215208 | orchestrator | + power_state = "active" 2026-02-13 02:25:19.215213 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.215216 | orchestrator | + security_groups = (known after apply) 2026-02-13 02:25:19.215220 | orchestrator | + stop_before_destroy = false 2026-02-13 02:25:19.215224 | orchestrator | + updated = (known after apply) 2026-02-13 02:25:19.215228 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-13 02:25:19.215232 | orchestrator | 2026-02-13 02:25:19.215236 | orchestrator | + block_device { 2026-02-13 02:25:19.215248 | orchestrator | + boot_index = 0 2026-02-13 02:25:19.215252 | orchestrator | + delete_on_termination = false 2026-02-13 02:25:19.215256 | orchestrator | + destination_type = "volume" 2026-02-13 02:25:19.215264 | orchestrator | + multiattach = false 2026-02-13 02:25:19.215268 | orchestrator | + source_type = "volume" 2026-02-13 02:25:19.215272 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.215276 | orchestrator | } 2026-02-13 02:25:19.215280 | orchestrator | 2026-02-13 02:25:19.215284 | orchestrator | + network { 2026-02-13 02:25:19.215287 | orchestrator | + access_network = false 2026-02-13 02:25:19.215368 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-13 02:25:19.215373 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-13 02:25:19.215381 | orchestrator | + mac = (known after apply) 2026-02-13 02:25:19.215389 | orchestrator | + name = (known after apply) 2026-02-13 02:25:19.215395 | orchestrator | + port = (known after apply) 2026-02-13 02:25:19.215401 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.215407 | orchestrator | } 2026-02-13 02:25:19.215413 | orchestrator | } 2026-02-13 02:25:19.215423 | orchestrator | 2026-02-13 02:25:19.215430 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-13 02:25:19.215436 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-13 02:25:19.215442 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-13 02:25:19.215448 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-13 02:25:19.215453 | orchestrator | + all_metadata = (known after apply) 2026-02-13 02:25:19.215459 | orchestrator | + all_tags = (known after apply) 2026-02-13 02:25:19.215464 | orchestrator | + availability_zone = "nova" 2026-02-13 
02:25:19.215470 | orchestrator | + config_drive = true 2026-02-13 02:25:19.215475 | orchestrator | + created = (known after apply) 2026-02-13 02:25:19.215481 | orchestrator | + flavor_id = (known after apply) 2026-02-13 02:25:19.215487 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-13 02:25:19.215493 | orchestrator | + force_delete = false 2026-02-13 02:25:19.215500 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-13 02:25:19.215505 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.215511 | orchestrator | + image_id = (known after apply) 2026-02-13 02:25:19.215517 | orchestrator | + image_name = (known after apply) 2026-02-13 02:25:19.215523 | orchestrator | + key_pair = "testbed" 2026-02-13 02:25:19.215529 | orchestrator | + name = "testbed-node-4" 2026-02-13 02:25:19.215535 | orchestrator | + power_state = "active" 2026-02-13 02:25:19.215541 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.215547 | orchestrator | + security_groups = (known after apply) 2026-02-13 02:25:19.215553 | orchestrator | + stop_before_destroy = false 2026-02-13 02:25:19.215559 | orchestrator | + updated = (known after apply) 2026-02-13 02:25:19.215566 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-13 02:25:19.215572 | orchestrator | 2026-02-13 02:25:19.215578 | orchestrator | + block_device { 2026-02-13 02:25:19.215584 | orchestrator | + boot_index = 0 2026-02-13 02:25:19.215591 | orchestrator | + delete_on_termination = false 2026-02-13 02:25:19.215597 | orchestrator | + destination_type = "volume" 2026-02-13 02:25:19.215604 | orchestrator | + multiattach = false 2026-02-13 02:25:19.215610 | orchestrator | + source_type = "volume" 2026-02-13 02:25:19.215617 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.215623 | orchestrator | } 2026-02-13 02:25:19.215630 | orchestrator | 2026-02-13 02:25:19.215636 | orchestrator | + network { 2026-02-13 02:25:19.215643 | orchestrator | + 
access_network = false 2026-02-13 02:25:19.215647 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-13 02:25:19.215651 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-13 02:25:19.215655 | orchestrator | + mac = (known after apply) 2026-02-13 02:25:19.215659 | orchestrator | + name = (known after apply) 2026-02-13 02:25:19.215662 | orchestrator | + port = (known after apply) 2026-02-13 02:25:19.215666 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.215670 | orchestrator | } 2026-02-13 02:25:19.215674 | orchestrator | } 2026-02-13 02:25:19.215687 | orchestrator | 2026-02-13 02:25:19.215691 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-13 02:25:19.215695 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-13 02:25:19.215699 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-13 02:25:19.215703 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-13 02:25:19.215706 | orchestrator | + all_metadata = (known after apply) 2026-02-13 02:25:19.215710 | orchestrator | + all_tags = (known after apply) 2026-02-13 02:25:19.215714 | orchestrator | + availability_zone = "nova" 2026-02-13 02:25:19.215717 | orchestrator | + config_drive = true 2026-02-13 02:25:19.215721 | orchestrator | + created = (known after apply) 2026-02-13 02:25:19.215725 | orchestrator | + flavor_id = (known after apply) 2026-02-13 02:25:19.215729 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-13 02:25:19.215733 | orchestrator | + force_delete = false 2026-02-13 02:25:19.215741 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-13 02:25:19.215745 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.215748 | orchestrator | + image_id = (known after apply) 2026-02-13 02:25:19.215752 | orchestrator | + image_name = (known after apply) 2026-02-13 02:25:19.215756 | orchestrator | + key_pair = "testbed" 2026-02-13 02:25:19.215760 | orchestrator | 
+ name = "testbed-node-5" 2026-02-13 02:25:19.215763 | orchestrator | + power_state = "active" 2026-02-13 02:25:19.215767 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.215771 | orchestrator | + security_groups = (known after apply) 2026-02-13 02:25:19.215775 | orchestrator | + stop_before_destroy = false 2026-02-13 02:25:19.215778 | orchestrator | + updated = (known after apply) 2026-02-13 02:25:19.215782 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-13 02:25:19.215786 | orchestrator | 2026-02-13 02:25:19.215790 | orchestrator | + block_device { 2026-02-13 02:25:19.215793 | orchestrator | + boot_index = 0 2026-02-13 02:25:19.215797 | orchestrator | + delete_on_termination = false 2026-02-13 02:25:19.215801 | orchestrator | + destination_type = "volume" 2026-02-13 02:25:19.215805 | orchestrator | + multiattach = false 2026-02-13 02:25:19.215808 | orchestrator | + source_type = "volume" 2026-02-13 02:25:19.215812 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.215816 | orchestrator | } 2026-02-13 02:25:19.215820 | orchestrator | 2026-02-13 02:25:19.215824 | orchestrator | + network { 2026-02-13 02:25:19.215827 | orchestrator | + access_network = false 2026-02-13 02:25:19.215831 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-13 02:25:19.215835 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-13 02:25:19.215839 | orchestrator | + mac = (known after apply) 2026-02-13 02:25:19.215842 | orchestrator | + name = (known after apply) 2026-02-13 02:25:19.215846 | orchestrator | + port = (known after apply) 2026-02-13 02:25:19.215850 | orchestrator | + uuid = (known after apply) 2026-02-13 02:25:19.215854 | orchestrator | } 2026-02-13 02:25:19.215858 | orchestrator | } 2026-02-13 02:25:19.215862 | orchestrator | 2026-02-13 02:25:19.215865 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-13 02:25:19.215869 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-02-13 02:25:19.215873 | orchestrator | + fingerprint = (known after apply) 2026-02-13 02:25:19.215877 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.215881 | orchestrator | + name = "testbed" 2026-02-13 02:25:19.215884 | orchestrator | + private_key = (sensitive value) 2026-02-13 02:25:19.215888 | orchestrator | + public_key = (known after apply) 2026-02-13 02:25:19.215892 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.215896 | orchestrator | + user_id = (known after apply) 2026-02-13 02:25:19.215899 | orchestrator | } 2026-02-13 02:25:19.215903 | orchestrator | 2026-02-13 02:25:19.215907 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-13 02:25:19.215911 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-13 02:25:19.215918 | orchestrator | + device = (known after apply) 2026-02-13 02:25:19.215922 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.215926 | orchestrator | + instance_id = (known after apply) 2026-02-13 02:25:19.215929 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.215933 | orchestrator | + volume_id = (known after apply) 2026-02-13 02:25:19.215937 | orchestrator | } 2026-02-13 02:25:19.215941 | orchestrator | 2026-02-13 02:25:19.215945 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-13 02:25:19.215948 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-13 02:25:19.215952 | orchestrator | + device = (known after apply) 2026-02-13 02:25:19.215956 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.215982 | orchestrator | + instance_id = (known after apply) 2026-02-13 02:25:19.215986 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.215990 | orchestrator | + volume_id = (known after apply) 2026-02-13 
02:25:19.215994 | orchestrator | } 2026-02-13 02:25:19.215998 | orchestrator | 2026-02-13 02:25:19.216001 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-13 02:25:19.216005 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-13 02:25:19.216009 | orchestrator | + device = (known after apply) 2026-02-13 02:25:19.216013 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.216018 | orchestrator | + instance_id = (known after apply) 2026-02-13 02:25:19.216024 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.216033 | orchestrator | + volume_id = (known after apply) 2026-02-13 02:25:19.216040 | orchestrator | } 2026-02-13 02:25:19.216045 | orchestrator | 2026-02-13 02:25:19.216051 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-02-13 02:25:19.216057 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-13 02:25:19.216063 | orchestrator | + device = (known after apply) 2026-02-13 02:25:19.216070 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.216076 | orchestrator | + instance_id = (known after apply) 2026-02-13 02:25:19.216082 | orchestrator | + region = (known after apply) 2026-02-13 02:25:19.216088 | orchestrator | + volume_id = (known after apply) 2026-02-13 02:25:19.216095 | orchestrator | } 2026-02-13 02:25:19.216104 | orchestrator | 2026-02-13 02:25:19.216108 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-02-13 02:25:19.216112 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-13 02:25:19.216115 | orchestrator | + device = (known after apply) 2026-02-13 02:25:19.216119 | orchestrator | + id = (known after apply) 2026-02-13 02:25:19.216123 | orchestrator | + instance_id = (known after apply) 2026-02-13 02:25:19.216130 | 
      + region    = (known after apply)
      + volume_id = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-02-13 02:25:19.218799 | orchestrator | + ip_version = 4
2026-02-13 02:25:19.218806 | orchestrator | + ipv6_address_mode = (known after apply)
2026-02-13 02:25:19.218813 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-02-13 02:25:19.218822 | orchestrator | + name = "subnet-testbed-management"
2026-02-13 02:25:19.218828 | orchestrator | + network_id = (known after apply)
2026-02-13 02:25:19.218834 | orchestrator | + no_gateway = false
2026-02-13 02:25:19.218840 | orchestrator | + region = (known after apply)
2026-02-13 02:25:19.218845 | orchestrator | + service_types = (known after apply)
2026-02-13 02:25:19.218857 | orchestrator | + tenant_id = (known after apply)
2026-02-13 02:25:19.218863 | orchestrator |
2026-02-13 02:25:19.218868 | orchestrator | + allocation_pool {
2026-02-13 02:25:19.218875 | orchestrator | + end = "192.168.31.250"
2026-02-13 02:25:19.218880 | orchestrator | + start = "192.168.31.200"
2026-02-13 02:25:19.218886 | orchestrator | }
2026-02-13 02:25:19.218891 | orchestrator | }
2026-02-13 02:25:19.218897 | orchestrator |
2026-02-13 02:25:19.218904 | orchestrator | # terraform_data.image will be created
2026-02-13 02:25:19.218910 | orchestrator | + resource "terraform_data" "image" {
2026-02-13 02:25:19.218916 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.218922 | orchestrator | + input = "Ubuntu 24.04"
2026-02-13 02:25:19.218928 | orchestrator | + output = (known after apply)
2026-02-13 02:25:19.218934 | orchestrator | }
2026-02-13 02:25:19.218940 | orchestrator |
2026-02-13 02:25:19.218946 | orchestrator | # terraform_data.image_node will be created
2026-02-13 02:25:19.218952 | orchestrator | + resource "terraform_data" "image_node" {
2026-02-13 02:25:19.218981 | orchestrator | + id = (known after apply)
2026-02-13 02:25:19.218986 | orchestrator | + input = "Ubuntu 24.04"
2026-02-13 02:25:19.218990 | orchestrator | + output = (known after apply)
2026-02-13 02:25:19.218994 | orchestrator | }
2026-02-13 02:25:19.218997 | orchestrator |
2026-02-13 02:25:19.219001 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-02-13 02:25:19.219005 | orchestrator |
2026-02-13 02:25:19.219009 | orchestrator | Changes to Outputs:
2026-02-13 02:25:19.219013 | orchestrator | + manager_address = (sensitive value)
2026-02-13 02:25:19.219016 | orchestrator | + private_key = (sensitive value)
2026-02-13 02:25:19.441271 | orchestrator | terraform_data.image_node: Creating...
2026-02-13 02:25:19.441337 | orchestrator | terraform_data.image: Creating...
2026-02-13 02:25:19.441749 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=bf9364d6-477c-af27-a762-e2c13c5abad1]
2026-02-13 02:25:19.442303 | orchestrator | terraform_data.image: Creation complete after 0s [id=d20230c5-0300-2239-7d70-061a8ab173e7]
2026-02-13 02:25:19.455330 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-13 02:25:19.459233 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-13 02:25:19.463063 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-13 02:25:19.467044 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-13 02:25:19.467166 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-02-13 02:25:19.470443 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-13 02:25:19.472047 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-13 02:25:19.473771 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-13 02:25:19.474732 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-13 02:25:19.479783 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-13 02:25:19.946565 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-13 02:25:19.954083 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-13 02:25:19.969112 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-13 02:25:19.978322 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-02-13 02:25:19.984902 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-13 02:25:19.989235 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-13 02:25:20.943269 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 2s [id=fb6dff2b-d33e-46e2-9488-ae72c36679dc]
2026-02-13 02:25:20.953951 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-13 02:25:23.095827 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=a697f046-4fd0-4ab4-8d74-c390a778d322]
2026-02-13 02:25:23.105366 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-13 02:25:23.123583 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=328f169c-733e-4f14-823b-87aac3d7f788]
2026-02-13 02:25:23.135648 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-13 02:25:23.141933 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=5b26d7d0-a0c8-4c7f-bd9d-e63316d26460]
2026-02-13 02:25:23.142457 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=2816f11f37eee9854caa1e40429b1d8c84393cf0]
2026-02-13 02:25:23.145285 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=48ecca72-7ee3-4b3a-9d71-2cc28b178165]
2026-02-13 02:25:23.149681 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-13 02:25:23.151157 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-13 02:25:23.151596 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-13 02:25:23.164628 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=e8d0143b-93aa-4fea-9af4-d1456432661e]
2026-02-13 02:25:23.172656 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-13 02:25:23.185932 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=4e1fd529-f92d-4aae-9efe-84acf01c9226]
2026-02-13 02:25:23.191417 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-13 02:25:23.212868 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=a2cf23bc-7fe2-4567-b5c7-4e51efed82f3]
2026-02-13 02:25:23.222082 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-13 02:25:23.226604 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=8f209ce1225ef293b53fd8726d8e376e085986be]
2026-02-13 02:25:23.229770 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=53853b9a-f5c7-4285-928f-a8aa60d7202d]
2026-02-13 02:25:23.241672 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-13 02:25:23.241758 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=848b7966-1abc-45c8-bb4e-7a18a2718e52]
2026-02-13 02:25:24.082312 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=f5e02c91-159c-4e9a-9a37-31f8fec240f1]
2026-02-13 02:25:24.091248 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-13 02:25:24.285509 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=91f5b10e-f3e3-4ebd-b719-1fd016e5b677]
2026-02-13 02:25:26.454399 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=70bc5ce7-ef2b-48d3-8c75-27accd01fe36]
2026-02-13 02:25:26.521194 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=d82ec97d-f827-4100-86b5-d0feadaf576d]
2026-02-13 02:25:26.550899 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=8816e0be-b769-4c64-9a1e-16e9d78e3106]
2026-02-13 02:25:26.563739 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=fd8b8514-7874-426e-a54e-5d908caa4a6d]
2026-02-13 02:25:26.588358 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=1e7782c1-d478-46d9-a0ec-d13f1d0cd82b]
2026-02-13 02:25:26.610787 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=e6ae2313-edff-4f38-a15e-e73833441a0d]
2026-02-13 02:25:26.710112 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=98d43bd9-dfef-4d68-8680-90486bfde996]
2026-02-13 02:25:26.719006 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-13 02:25:26.721247 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-13 02:25:26.721324 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-13 02:25:26.919194 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=a48d2649-2a65-453d-aac5-1d7b99a5d648]
2026-02-13 02:25:26.935924 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-13 02:25:26.937055 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-13 02:25:26.945418 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-13 02:25:26.945501 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-13 02:25:26.947306 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-13 02:25:26.947453 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-13 02:25:26.948251 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-13 02:25:26.949479 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=755a095f-9414-4fba-bdd6-e91271d0aef4]
2026-02-13 02:25:26.956188 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-13 02:25:26.960084 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-13 02:25:27.126194 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=e25802b7-622a-4ec0-a650-b007d3970954]
2026-02-13 02:25:27.140631 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-13 02:25:27.326985 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=d347fd38-e2f2-4f1a-98aa-df17cc4c40de]
2026-02-13 02:25:27.336455 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-13 02:25:27.474329 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=ea12f5ca-1eb9-4bbb-a942-c8d1fc28bf80]
2026-02-13 02:25:27.479672 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-13 02:25:27.534476 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=b45d947c-ad1b-41cf-acec-c3f10d073339]
2026-02-13 02:25:27.543774 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-13 02:25:27.565981 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=4ca7ee62-cbbe-441b-a2d1-be73f4c3616a]
2026-02-13 02:25:27.585554 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-13 02:25:27.665943 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=59761959-695c-408a-8962-1fcb720815ee]
2026-02-13 02:25:27.677881 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=e62c53e5-cfaa-42bf-ab12-191a8b395bca]
2026-02-13 02:25:27.679175 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-13 02:25:27.682746 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-13 02:25:27.697759 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=a0efab77-473d-460e-8b46-34940026d99d]
2026-02-13 02:25:27.710267 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=9f1fb266-26c7-42f0-8554-83efa1cc2bbb]
2026-02-13 02:25:27.733799 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=50d7a29e-fb07-42b7-a7b4-e9572940fad6]
2026-02-13 02:25:27.745667 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=ee544db9-64f8-4aba-890f-73444b081489]
2026-02-13 02:25:27.826665 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=cb59f827-21d3-4f0c-a04a-aecc886bc07f]
2026-02-13 02:25:27.860639 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=e241678a-85d9-4cb9-aefd-1118530f25be]
2026-02-13 02:25:28.056486 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=af3a5b0c-7b25-4fe2-89a4-6526859532ae]
2026-02-13 02:25:28.199620 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=48b023d0-e069-46cd-93a2-1f69ba174b9e]
2026-02-13 02:25:28.501466 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=11e74f0f-eb20-4fb9-983c-e58b4ff913e2]
2026-02-13 02:25:28.988430 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=06188eb4-bdbf-4d0b-84b5-6503676a1cac]
2026-02-13 02:25:28.996585 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-13 02:25:29.017773 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-13 02:25:29.021671 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-13 02:25:29.022085 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-13 02:25:29.023527 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-02-13 02:25:29.031067 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-13 02:25:29.033495 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-13 02:25:30.360320 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=2ffb4949-dc90-413f-ba56-805ca0c110b8]
2026-02-13 02:25:30.369368 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-13 02:25:30.379337 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-13 02:25:30.385969 | orchestrator | local_file.inventory: Creating...
2026-02-13 02:25:30.386589 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=52c2e56f981542713c4334c1cb7b5ba45f373cde]
2026-02-13 02:25:30.390976 | orchestrator | local_file.inventory: Creation complete after 0s [id=81819e6834739a78d6410e96fda5cc8bcfada749]
2026-02-13 02:25:31.739931 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=2ffb4949-dc90-413f-ba56-805ca0c110b8]
2026-02-13 02:25:39.023464 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-13 02:25:39.024793 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-13 02:25:39.028254 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-13 02:25:39.030346 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-13 02:25:39.036649 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-13 02:25:39.036762 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-13 02:25:49.024510 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-13 02:25:49.025496 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-13 02:25:49.028874 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-13 02:25:49.030996 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-13 02:25:49.037372 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-13 02:25:49.037457 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-13 02:25:49.423543 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 20s [id=308ae992-defe-45a6-8356-5ea8cd30aeac]
2026-02-13 02:25:49.521585 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 21s [id=a8babb1b-39b1-4997-b9fa-f1553d761014]
2026-02-13 02:25:49.550033 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=5a104098-955c-4e6a-a89e-cd67f52ba98e]
2026-02-13 02:25:59.029502 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-02-13 02:25:59.031671 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-13 02:25:59.038074 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-02-13 02:25:59.780435 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=58b6c9e8-fc11-4304-b8ae-ba02b4cf3ffb]
2026-02-13 02:25:59.790797 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=7d28fafd-e248-46eb-9a57-6912befbd918]
2026-02-13 02:25:59.805283 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=5d1efa0e-bce3-4656-a139-9f6441025117]
2026-02-13 02:25:59.812206 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-13 02:25:59.833592 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=7717700867365679567]
2026-02-13 02:25:59.839492 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-13 02:25:59.841454 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-13 02:25:59.841891 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-13 02:25:59.845221 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-13 02:25:59.849575 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-13 02:25:59.853182 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-13 02:25:59.854334 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-13 02:25:59.857126 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-13 02:25:59.858093 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-13 02:25:59.873458 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-02-13 02:26:03.205655 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=58b6c9e8-fc11-4304-b8ae-ba02b4cf3ffb/4e1fd529-f92d-4aae-9efe-84acf01c9226]
2026-02-13 02:26:03.235176 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 3s [id=308ae992-defe-45a6-8356-5ea8cd30aeac/53853b9a-f5c7-4285-928f-a8aa60d7202d]
2026-02-13 02:26:03.261206 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=58b6c9e8-fc11-4304-b8ae-ba02b4cf3ffb/a697f046-4fd0-4ab4-8d74-c390a778d322]
2026-02-13 02:26:03.293591 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=7d28fafd-e248-46eb-9a57-6912befbd918/5b26d7d0-a0c8-4c7f-bd9d-e63316d26460]
2026-02-13 02:26:03.316009 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=308ae992-defe-45a6-8356-5ea8cd30aeac/a2cf23bc-7fe2-4567-b5c7-4e51efed82f3]
2026-02-13 02:26:03.364174 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=7d28fafd-e248-46eb-9a57-6912befbd918/848b7966-1abc-45c8-bb4e-7a18a2718e52]
2026-02-13 02:26:09.391033 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=58b6c9e8-fc11-4304-b8ae-ba02b4cf3ffb/48ecca72-7ee3-4b3a-9d71-2cc28b178165]
2026-02-13 02:26:09.402288 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 9s [id=308ae992-defe-45a6-8356-5ea8cd30aeac/e8d0143b-93aa-4fea-9af4-d1456432661e]
2026-02-13 02:26:09.534203 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 10s [id=7d28fafd-e248-46eb-9a57-6912befbd918/328f169c-733e-4f14-823b-87aac3d7f788]
2026-02-13 02:26:09.874985 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-13 02:26:19.876254 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-13 02:26:20.205282 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=54635d4c-df9f-4cbf-b07b-f992ae0c9b98]
2026-02-13 02:26:20.221016 | orchestrator |
2026-02-13 02:26:20.221107 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-13 02:26:20.221118 | orchestrator |
2026-02-13 02:26:20.221125 | orchestrator | Outputs:
2026-02-13 02:26:20.221133 | orchestrator |
2026-02-13 02:26:20.221150 | orchestrator | manager_address =
2026-02-13 02:26:20.221157 | orchestrator | private_key =
2026-02-13 02:26:20.307145 | orchestrator | ok: Runtime: 0:01:08.083339
2026-02-13 02:26:20.327014 |
2026-02-13 02:26:20.327163 | TASK [Fetch manager address]
2026-02-13 02:26:20.833137 | orchestrator | ok
2026-02-13 02:26:20.845124 |
2026-02-13 02:26:20.845271 | TASK [Set manager_host address]
2026-02-13 02:26:20.925300 | orchestrator | ok
2026-02-13 02:26:20.935387 |
2026-02-13 02:26:20.935520 | LOOP [Update ansible collections]
2026-02-13 02:26:23.764100 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-13 02:26:23.764449 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-13 02:26:23.764525 | orchestrator | Starting galaxy collection install process
2026-02-13 02:26:23.764567 | orchestrator | Process install dependency map
2026-02-13 02:26:23.764604 | orchestrator | Starting collection install process
2026-02-13 02:26:23.764638 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-13 02:26:23.764676 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-13 02:26:23.764717 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-13 02:26:23.764787 | orchestrator | ok: Item: commons Runtime: 0:00:02.503057
2026-02-13 02:26:24.743452 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-13 02:26:24.743629 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-13 02:26:24.743692 | orchestrator | Starting galaxy collection install process
2026-02-13 02:26:24.743740 | orchestrator | Process install dependency map
2026-02-13 02:26:24.743785 | orchestrator | Starting collection install process
2026-02-13 02:26:24.743825 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-13 02:26:24.743867 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-13 02:26:24.743907 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-13 02:26:24.743968 | orchestrator | ok: Item: services Runtime: 0:00:00.674050
2026-02-13 02:26:24.767315 |
2026-02-13 02:26:24.767489 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-13 02:26:35.374993 | orchestrator | ok
2026-02-13 02:26:35.386162 |
2026-02-13 02:26:35.386294 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-13 02:27:35.434310 | orchestrator | ok
2026-02-13 02:27:35.445289 |
2026-02-13 02:27:35.445412 | TASK [Fetch manager ssh hostkey]
2026-02-13 02:27:37.021204 | orchestrator | Output suppressed because no_log was given
2026-02-13 02:27:37.036782 |
2026-02-13 02:27:37.036954 | TASK [Get ssh keypair from terraform environment]
2026-02-13 02:27:37.572985 | orchestrator | ok: Runtime: 0:00:00.007750
2026-02-13 02:27:37.589046 |
2026-02-13 02:27:37.589286 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-13 02:27:37.626420 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-13 02:27:37.635768 |
2026-02-13 02:27:37.635901 | TASK [Run manager part 0]
2026-02-13 02:27:39.199557 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-13 02:27:39.368703 | orchestrator |
2026-02-13 02:27:39.368803 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-13 02:27:39.368821 | orchestrator |
2026-02-13 02:27:39.368848 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-13 02:27:40.973216 | orchestrator | ok: [testbed-manager]
2026-02-13 02:27:40.973289 | orchestrator |
2026-02-13 02:27:40.973317 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-13 02:27:40.973331 | orchestrator |
2026-02-13 02:27:40.973343 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-13 02:27:42.933813 | orchestrator | ok: [testbed-manager]
2026-02-13 02:27:42.933887 | orchestrator |
2026-02-13 02:27:42.933900 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-13 02:27:43.599226 | orchestrator | ok: [testbed-manager]
2026-02-13 02:27:43.599295 | orchestrator |
2026-02-13 02:27:43.599304 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-13 02:27:43.650347 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:27:43.650398 | orchestrator |
2026-02-13 02:27:43.650407 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-13 02:27:43.677365 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:27:43.677532 | orchestrator |
2026-02-13 02:27:43.677559 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-13 02:27:43.719892 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:27:43.719992 | orchestrator | 2026-02-13 02:27:43.720006 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-13 02:27:43.759346 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:27:43.759395 | orchestrator | 2026-02-13 02:27:43.759401 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-13 02:27:43.801152 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:27:43.801229 | orchestrator | 2026-02-13 02:27:43.801239 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-13 02:27:43.847685 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:27:43.847760 | orchestrator | 2026-02-13 02:27:43.847772 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-13 02:27:43.885290 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:27:43.885352 | orchestrator | 2026-02-13 02:27:43.885361 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-13 02:27:44.660340 | orchestrator | changed: [testbed-manager] 2026-02-13 02:27:44.660396 | orchestrator | 2026-02-13 02:27:44.660403 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-13 02:30:16.989209 | orchestrator | changed: [testbed-manager] 2026-02-13 02:30:16.989279 | orchestrator | 2026-02-13 02:30:16.989298 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-13 02:31:35.104379 | orchestrator | changed: [testbed-manager] 2026-02-13 02:31:35.104499 | orchestrator | 2026-02-13 02:31:35.104526 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-02-13 02:31:53.981100 | orchestrator | changed: [testbed-manager] 2026-02-13 02:31:53.981145 | orchestrator | 2026-02-13 02:31:53.981155 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-13 02:32:02.068917 | orchestrator | changed: [testbed-manager] 2026-02-13 02:32:02.069013 | orchestrator | 2026-02-13 02:32:02.069029 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-13 02:32:02.128090 | orchestrator | ok: [testbed-manager] 2026-02-13 02:32:02.128185 | orchestrator | 2026-02-13 02:32:02.128200 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-13 02:32:02.919944 | orchestrator | ok: [testbed-manager] 2026-02-13 02:32:02.920038 | orchestrator | 2026-02-13 02:32:02.920062 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-13 02:32:03.648562 | orchestrator | changed: [testbed-manager] 2026-02-13 02:32:03.648654 | orchestrator | 2026-02-13 02:32:03.648672 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-13 02:32:09.434321 | orchestrator | changed: [testbed-manager] 2026-02-13 02:32:09.434497 | orchestrator | 2026-02-13 02:32:09.434536 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-13 02:32:14.849335 | orchestrator | changed: [testbed-manager] 2026-02-13 02:32:14.849432 | orchestrator | 2026-02-13 02:32:14.849452 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-13 02:32:17.412811 | orchestrator | changed: [testbed-manager] 2026-02-13 02:32:17.412910 | orchestrator | 2026-02-13 02:32:17.412926 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-13 02:32:19.208066 | 
orchestrator | changed: [testbed-manager] 2026-02-13 02:32:19.208161 | orchestrator | 2026-02-13 02:32:19.208178 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-13 02:32:20.332413 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-13 02:32:20.332502 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-13 02:32:20.332517 | orchestrator | 2026-02-13 02:32:20.332530 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-13 02:32:20.381339 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-13 02:32:20.381400 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-13 02:32:20.381409 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-13 02:32:20.381417 | orchestrator | deprecation_warnings=False in ansible.cfg. 
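The "Create directories in /opt/src" and "Sync sources in /opt/src" tasks above loop over the same two collection repositories. A minimal stand-alone sketch of that pair, using throwaway temp prefixes instead of /opt/src and the real Zuul checkout so it can run anywhere (`cp -a` stands in for the playbook's rsync-based sync):

```shell
SRC_PREFIX=$(mktemp -d)   # stand-in for /opt/src
CHECKOUT=$(mktemp -d)     # stand-in for the source checkout on the orchestrator

for repo in osism/ansible-collection-commons osism/ansible-collection-services; do
    name=$(basename "$repo")
    # Fake a minimal collection checkout so there is something to sync.
    mkdir -p "$CHECKOUT/$name"
    printf 'namespace: osism\nname: %s\n' "${name#ansible-collection-}" > "$CHECKOUT/$name/galaxy.yml"
    # "Create directories" + "Sync sources" for this loop item.
    mkdir -p "$SRC_PREFIX/$repo"
    cp -a "$CHECKOUT/$name/." "$SRC_PREFIX/$repo/"
done
```

The synced trees are what the later "Install local collections" task feeds to ansible-galaxy, instead of pulling the collections from Galaxy.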
2026-02-13 02:32:30.498443 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-13 02:32:30.498498 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-13 02:32:30.498508 | orchestrator | 2026-02-13 02:32:30.498517 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-13 02:32:31.047457 | orchestrator | changed: [testbed-manager] 2026-02-13 02:32:31.047497 | orchestrator | 2026-02-13 02:32:31.047506 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-13 02:32:50.764068 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-13 02:32:50.764180 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-13 02:32:50.764200 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-13 02:32:50.764213 | orchestrator | 2026-02-13 02:32:50.764226 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-13 02:32:52.991441 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-13 02:32:52.992120 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-13 02:32:52.992140 | orchestrator | 2026-02-13 02:32:52.992146 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-13 02:32:52.992151 | orchestrator | 2026-02-13 02:32:52.992155 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-13 02:32:54.354153 | orchestrator | ok: [testbed-manager] 2026-02-13 02:32:54.354239 | orchestrator | 2026-02-13 02:32:54.354258 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-13 02:32:54.402679 | orchestrator | ok: [testbed-manager] 2026-02-13 02:32:54.402766 | 
orchestrator | 2026-02-13 02:32:54.402781 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-13 02:32:54.472111 | orchestrator | ok: [testbed-manager] 2026-02-13 02:32:54.472200 | orchestrator | 2026-02-13 02:32:54.472216 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-13 02:32:55.232212 | orchestrator | changed: [testbed-manager] 2026-02-13 02:32:55.232266 | orchestrator | 2026-02-13 02:32:55.232275 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-13 02:32:55.921540 | orchestrator | changed: [testbed-manager] 2026-02-13 02:32:55.921631 | orchestrator | 2026-02-13 02:32:55.921647 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-13 02:32:57.248903 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-13 02:32:57.249003 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-13 02:32:57.249020 | orchestrator | 2026-02-13 02:32:57.249047 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-13 02:32:58.602650 | orchestrator | changed: [testbed-manager] 2026-02-13 02:32:58.602806 | orchestrator | 2026-02-13 02:32:58.602829 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-13 02:33:00.518774 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-13 02:33:00.518809 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-13 02:33:00.518814 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-13 02:33:00.518818 | orchestrator | 2026-02-13 02:33:00.518824 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-13 02:33:00.578071 | orchestrator | skipping: 
[testbed-manager] 2026-02-13 02:33:00.578115 | orchestrator | 2026-02-13 02:33:00.578123 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-13 02:33:00.654912 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:00.654965 | orchestrator | 2026-02-13 02:33:00.654972 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-13 02:33:01.248922 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:01.248969 | orchestrator | 2026-02-13 02:33:01.248980 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-13 02:33:01.321312 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:01.321376 | orchestrator | 2026-02-13 02:33:01.321386 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-13 02:33:02.279704 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-13 02:33:02.279793 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:02.279809 | orchestrator | 2026-02-13 02:33:02.279822 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-13 02:33:02.327305 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:02.327404 | orchestrator | 2026-02-13 02:33:02.327418 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-13 02:33:02.363662 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:02.363723 | orchestrator | 2026-02-13 02:33:02.363732 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-13 02:33:02.396498 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:02.396552 | orchestrator | 2026-02-13 02:33:02.396560 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-13 02:33:02.465110 | 
orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:02.465165 | orchestrator | 2026-02-13 02:33:02.465170 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-13 02:33:03.200691 | orchestrator | ok: [testbed-manager] 2026-02-13 02:33:03.200917 | orchestrator | 2026-02-13 02:33:03.200939 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-13 02:33:03.200952 | orchestrator | 2026-02-13 02:33:03.200964 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-13 02:33:04.659191 | orchestrator | ok: [testbed-manager] 2026-02-13 02:33:04.659270 | orchestrator | 2026-02-13 02:33:04.659283 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-13 02:33:05.677496 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:05.677543 | orchestrator | 2026-02-13 02:33:05.677672 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:33:05.677689 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-13 02:33:05.677697 | orchestrator | 2026-02-13 02:33:05.889213 | orchestrator | ok: Runtime: 0:05:27.843349 2026-02-13 02:33:05.905957 | 2026-02-13 02:33:05.906087 | TASK [Point out that logging in on the manager is now possible] 2026-02-13 02:33:05.945743 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-02-13 02:33:05.956378 | 2026-02-13 02:33:05.956507 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-13 02:33:06.004854 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
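The PLAY RECAP line above (failed=0, unreachable=0) is what marks this nested playbook run as clean. A hypothetical helper for judging such a line from a wrapper script (`recap_ok` is not part of the job; it is a loose substring check, sufficient for well-formed recap lines):

```shell
# recap_ok: succeed only when a PLAY RECAP host line reports both
# failed=0 and unreachable=0.
recap_ok() {
    case "$1" in
        *unreachable=0*) ;;
        *) return 1 ;;
    esac
    case "$1" in
        *failed=0*) return 0 ;;
        *) return 1 ;;
    esac
}
```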
2026-02-13 02:33:06.014791 | 2026-02-13 02:33:06.014957 | TASK [Run manager part 1 + 2] 2026-02-13 02:33:06.925211 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-13 02:33:06.980869 | orchestrator | 2026-02-13 02:33:06.981009 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-13 02:33:06.981029 | orchestrator | 2026-02-13 02:33:06.981061 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-13 02:33:09.512437 | orchestrator | ok: [testbed-manager] 2026-02-13 02:33:09.512541 | orchestrator | 2026-02-13 02:33:09.512625 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-13 02:33:09.555598 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:09.555672 | orchestrator | 2026-02-13 02:33:09.555689 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-13 02:33:09.597562 | orchestrator | ok: [testbed-manager] 2026-02-13 02:33:09.597621 | orchestrator | 2026-02-13 02:33:09.597629 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-13 02:33:09.640670 | orchestrator | ok: [testbed-manager] 2026-02-13 02:33:09.640726 | orchestrator | 2026-02-13 02:33:09.640736 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-13 02:33:09.708901 | orchestrator | ok: [testbed-manager] 2026-02-13 02:33:09.708957 | orchestrator | 2026-02-13 02:33:09.708965 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-13 02:33:09.771618 | orchestrator | ok: [testbed-manager] 2026-02-13 02:33:09.771677 | orchestrator | 2026-02-13 02:33:09.771685 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-13 02:33:09.812945 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-13 02:33:09.813044 | orchestrator | 2026-02-13 02:33:09.813061 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-13 02:33:10.591552 | orchestrator | ok: [testbed-manager] 2026-02-13 02:33:10.591632 | orchestrator | 2026-02-13 02:33:10.591645 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-13 02:33:10.641541 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:10.641636 | orchestrator | 2026-02-13 02:33:10.641652 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-13 02:33:12.005868 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:12.005975 | orchestrator | 2026-02-13 02:33:12.005985 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-13 02:33:12.591164 | orchestrator | ok: [testbed-manager] 2026-02-13 02:33:12.591256 | orchestrator | 2026-02-13 02:33:12.591272 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-13 02:33:13.786915 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:13.787027 | orchestrator | 2026-02-13 02:33:13.787057 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-13 02:33:28.004497 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:28.004562 | orchestrator | 2026-02-13 02:33:28.004569 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-13 02:33:28.685255 | orchestrator | ok: [testbed-manager] 2026-02-13 02:33:28.685289 | orchestrator | 2026-02-13 02:33:28.685297 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
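The "Remove sources.list file" / "Copy ubuntu.sources file" pair above reflects the modern deb822 APT source format: on Ubuntu 24.04 the classic one-line /etc/apt/sources.list is replaced by /etc/apt/sources.list.d/ubuntu.sources. A sketch of such a file (mirror URL and suites here are generic Ubuntu defaults, not taken from the job; a temp file stands in for the real path):

```shell
SOURCES=$(mktemp)
# Illustrative deb822 stanza; one stanza can carry several suites at once.
cat > "$SOURCES" <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
# On a real system this lands at /etc/apt/sources.list.d/ubuntu.sources,
# followed by `apt-get update` (the "Update package cache" task above).
```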
2026-02-13 02:33:28.736969 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:28.737009 | orchestrator | 2026-02-13 02:33:28.737017 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-13 02:33:29.693105 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:29.693141 | orchestrator | 2026-02-13 02:33:29.693148 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-13 02:33:30.629975 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:30.630080 | orchestrator | 2026-02-13 02:33:30.630090 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-13 02:33:31.184380 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:31.184423 | orchestrator | 2026-02-13 02:33:31.184456 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-13 02:33:31.225133 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-13 02:33:31.225240 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-13 02:33:31.225256 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-13 02:33:31.225269 | orchestrator | deprecation_warnings=False in ansible.cfg. 
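The venv handling seen here and in the earlier "Create venv directory" / "Install ... in venv" tasks follows a common pattern: create one virtual environment and always address its interpreter and pip by absolute path instead of activating it. A minimal sketch, with a temp path standing in for /opt/venv and the pinned installs left as comments so nothing is downloaded:

```shell
VENV="$(mktemp -d)/venv"
python3 -m venv "$VENV"
# The playbook then pins packages with the venv's own pip, e.g.:
#   "$VENV/bin/pip" install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
"$VENV/bin/python" --version
```

Using `$VENV/bin/...` directly is what lets later tasks (and the deploy scripts that `source /opt/venv/bin/activate`) share the same interpreter without relying on the caller's PATH.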
2026-02-13 02:33:35.173746 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:35.173823 | orchestrator | 2026-02-13 02:33:35.173837 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-13 02:33:43.786618 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-13 02:33:43.786734 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-13 02:33:43.786752 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-13 02:33:43.786765 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-13 02:33:43.786786 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-13 02:33:43.786797 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-13 02:33:43.786808 | orchestrator | 2026-02-13 02:33:43.786821 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-13 02:33:44.805100 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:44.805198 | orchestrator | 2026-02-13 02:33:44.805213 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-13 02:33:44.847785 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:44.847881 | orchestrator | 2026-02-13 02:33:44.847898 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-13 02:33:47.734421 | orchestrator | changed: [testbed-manager] 2026-02-13 02:33:47.734560 | orchestrator | 2026-02-13 02:33:47.734588 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-13 02:33:47.779285 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:33:47.779365 | orchestrator | 2026-02-13 02:33:47.779380 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-13 02:35:22.070941 | orchestrator | changed: [testbed-manager] 2026-02-13 
02:35:22.071041 | orchestrator | 2026-02-13 02:35:22.071059 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-13 02:35:23.130082 | orchestrator | ok: [testbed-manager] 2026-02-13 02:35:23.130127 | orchestrator | 2026-02-13 02:35:23.130134 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:35:23.130141 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-13 02:35:23.130147 | orchestrator | 2026-02-13 02:35:23.639809 | orchestrator | ok: Runtime: 0:02:16.898920 2026-02-13 02:35:23.657196 | 2026-02-13 02:35:23.657395 | TASK [Reboot manager] 2026-02-13 02:35:25.198537 | orchestrator | ok: Runtime: 0:00:00.927514 2026-02-13 02:35:25.215664 | 2026-02-13 02:35:25.215837 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-13 02:35:38.872495 | orchestrator | ok 2026-02-13 02:35:38.884150 | 2026-02-13 02:35:38.884358 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-13 02:36:38.936042 | orchestrator | ok 2026-02-13 02:36:38.946698 | 2026-02-13 02:36:38.946867 | TASK [Deploy manager + bootstrap nodes] 2026-02-13 02:36:41.340405 | orchestrator | 2026-02-13 02:36:41.340627 | orchestrator | # DEPLOY MANAGER 2026-02-13 02:36:41.340651 | orchestrator | 2026-02-13 02:36:41.340665 | orchestrator | + set -e 2026-02-13 02:36:41.340678 | orchestrator | + echo 2026-02-13 02:36:41.340692 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-13 02:36:41.340709 | orchestrator | + echo 2026-02-13 02:36:41.340758 | orchestrator | + cat /opt/manager-vars.sh 2026-02-13 02:36:41.343140 | orchestrator | export NUMBER_OF_NODES=6 2026-02-13 02:36:41.343189 | orchestrator | 2026-02-13 02:36:41.343210 | orchestrator | export CEPH_VERSION=reef 2026-02-13 02:36:41.343225 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-13 02:36:41.343238 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-13 02:36:41.343261 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-13 02:36:41.343272 | orchestrator | 2026-02-13 02:36:41.343290 | orchestrator | export ARA=false 2026-02-13 02:36:41.343301 | orchestrator | export DEPLOY_MODE=manager 2026-02-13 02:36:41.343319 | orchestrator | export TEMPEST=false 2026-02-13 02:36:41.343330 | orchestrator | export IS_ZUUL=true 2026-02-13 02:36:41.343347 | orchestrator | 2026-02-13 02:36:41.343408 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 02:36:41.343448 | orchestrator | export EXTERNAL_API=false 2026-02-13 02:36:41.343468 | orchestrator | 2026-02-13 02:36:41.343488 | orchestrator | export IMAGE_USER=ubuntu 2026-02-13 02:36:41.343504 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-13 02:36:41.343514 | orchestrator | 2026-02-13 02:36:41.343525 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-13 02:36:41.343544 | orchestrator | 2026-02-13 02:36:41.343556 | orchestrator | + echo 2026-02-13 02:36:41.343573 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 02:36:41.344328 | orchestrator | ++ export INTERACTIVE=false 2026-02-13 02:36:41.344352 | orchestrator | ++ INTERACTIVE=false 2026-02-13 02:36:41.344372 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-13 02:36:41.344392 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-13 02:36:41.344578 | orchestrator | + source /opt/manager-vars.sh 2026-02-13 02:36:41.344609 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-13 02:36:41.344631 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-13 02:36:41.344662 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-13 02:36:41.344681 | orchestrator | ++ CEPH_VERSION=reef 2026-02-13 02:36:41.344702 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-13 02:36:41.344732 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-13 02:36:41.344753 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-13 02:36:41.344773 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-13 02:36:41.344810 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-13 02:36:41.344846 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-13 02:36:41.344859 | orchestrator | ++ export ARA=false 2026-02-13 02:36:41.344870 | orchestrator | ++ ARA=false 2026-02-13 02:36:41.344880 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-13 02:36:41.344891 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-13 02:36:41.344901 | orchestrator | ++ export TEMPEST=false 2026-02-13 02:36:41.344912 | orchestrator | ++ TEMPEST=false 2026-02-13 02:36:41.344923 | orchestrator | ++ export IS_ZUUL=true 2026-02-13 02:36:41.344933 | orchestrator | ++ IS_ZUUL=true 2026-02-13 02:36:41.344944 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 02:36:41.344955 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 02:36:41.344966 | orchestrator | ++ export EXTERNAL_API=false 2026-02-13 02:36:41.345011 | orchestrator | ++ EXTERNAL_API=false 2026-02-13 02:36:41.345021 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-13 02:36:41.345032 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-13 02:36:41.345043 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-13 02:36:41.345053 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-13 02:36:41.345064 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-13 02:36:41.345075 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-13 02:36:41.345086 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-13 02:36:41.395810 | orchestrator | + docker version 2026-02-13 02:36:41.491642 | orchestrator | Client: Docker Engine - Community 2026-02-13 02:36:41.491746 | orchestrator | Version: 27.5.1 2026-02-13 02:36:41.491764 | orchestrator | API version: 1.47 2026-02-13 02:36:41.491776 | orchestrator | Go version: go1.22.11 2026-02-13 02:36:41.491787 | orchestrator | Git commit: 9f9e405 2026-02-13 02:36:41.491798 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-13 02:36:41.491811 | orchestrator | OS/Arch: linux/amd64 2026-02-13 02:36:41.491822 | orchestrator | Context: default 2026-02-13 02:36:41.491832 | orchestrator | 2026-02-13 02:36:41.491844 | orchestrator | Server: Docker Engine - Community 2026-02-13 02:36:41.491855 | orchestrator | Engine: 2026-02-13 02:36:41.491876 | orchestrator | Version: 27.5.1 2026-02-13 02:36:41.491896 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-13 02:36:41.491955 | orchestrator | Go version: go1.22.11 2026-02-13 02:36:41.492023 | orchestrator | Git commit: 4c9b3b0 2026-02-13 02:36:41.492042 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-13 02:36:41.492053 | orchestrator | OS/Arch: linux/amd64 2026-02-13 02:36:41.492064 | orchestrator | Experimental: false 2026-02-13 02:36:41.492074 | orchestrator | containerd: 2026-02-13 02:36:41.492086 | orchestrator | Version: v2.2.1 2026-02-13 02:36:41.492097 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-13 02:36:41.492108 | orchestrator | runc: 2026-02-13 02:36:41.492125 | orchestrator | Version: 1.3.4 2026-02-13 02:36:41.492143 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-13 02:36:41.492161 | orchestrator | docker-init: 2026-02-13 02:36:41.492178 | orchestrator | Version: 0.19.0 2026-02-13 02:36:41.492197 | orchestrator | GitCommit: de40ad0 2026-02-13 02:36:41.495284 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-13 02:36:41.505061 | orchestrator | + set -e 2026-02-13 02:36:41.505142 | orchestrator | + source /opt/manager-vars.sh 2026-02-13 02:36:41.505163 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-13 02:36:41.505183 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-13 02:36:41.505201 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-13 02:36:41.505220 | orchestrator | ++ CEPH_VERSION=reef 2026-02-13 02:36:41.505237 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-13 
02:36:41.505257 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-13 02:36:41.505276 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-13 02:36:41.505293 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-13 02:36:41.505310 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-13 02:36:41.505321 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-13 02:36:41.505332 | orchestrator | ++ export ARA=false 2026-02-13 02:36:41.505343 | orchestrator | ++ ARA=false 2026-02-13 02:36:41.505353 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-13 02:36:41.505364 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-13 02:36:41.505374 | orchestrator | ++ export TEMPEST=false 2026-02-13 02:36:41.505384 | orchestrator | ++ TEMPEST=false 2026-02-13 02:36:41.505402 | orchestrator | ++ export IS_ZUUL=true 2026-02-13 02:36:41.505413 | orchestrator | ++ IS_ZUUL=true 2026-02-13 02:36:41.505424 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 02:36:41.505435 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 02:36:41.505446 | orchestrator | ++ export EXTERNAL_API=false 2026-02-13 02:36:41.505456 | orchestrator | ++ EXTERNAL_API=false 2026-02-13 02:36:41.505476 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-13 02:36:41.505494 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-13 02:36:41.505512 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-13 02:36:41.505528 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-13 02:36:41.505544 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-13 02:36:41.505560 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-13 02:36:41.505577 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 02:36:41.505593 | orchestrator | ++ export INTERACTIVE=false 2026-02-13 02:36:41.505609 | orchestrator | ++ INTERACTIVE=false 2026-02-13 02:36:41.505625 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-13 02:36:41.505647 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-02-13 02:36:41.505665 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-13 02:36:41.505682 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0 2026-02-13 02:36:41.512778 | orchestrator | + set -e 2026-02-13 02:36:41.512851 | orchestrator | + VERSION=9.5.0 2026-02-13 02:36:41.512868 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml 2026-02-13 02:36:41.522098 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]] 2026-02-13 02:36:41.522163 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-13 02:36:41.526847 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-13 02:36:41.530841 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-13 02:36:41.537657 | orchestrator | /opt/configuration ~ 2026-02-13 02:36:41.537726 | orchestrator | + set -e 2026-02-13 02:36:41.537738 | orchestrator | + pushd /opt/configuration 2026-02-13 02:36:41.537748 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-13 02:36:41.538835 | orchestrator | + source /opt/venv/bin/activate 2026-02-13 02:36:41.541955 | orchestrator | ++ deactivate nondestructive 2026-02-13 02:36:41.542079 | orchestrator | ++ '[' -n '' ']' 2026-02-13 02:36:41.542093 | orchestrator | ++ '[' -n '' ']' 2026-02-13 02:36:41.542121 | orchestrator | ++ hash -r 2026-02-13 02:36:41.542128 | orchestrator | ++ '[' -n '' ']' 2026-02-13 02:36:41.542134 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-13 02:36:41.542140 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-13 02:36:41.542147 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-13 02:36:41.542165 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-13 02:36:41.542171 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-13 02:36:41.542177 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-13 02:36:41.542183 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-13 02:36:41.542190 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-13 02:36:41.542198 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-13 02:36:41.542204 | orchestrator | ++ export PATH 2026-02-13 02:36:41.542210 | orchestrator | ++ '[' -n '' ']' 2026-02-13 02:36:41.542216 | orchestrator | ++ '[' -z '' ']' 2026-02-13 02:36:41.542223 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-13 02:36:41.542228 | orchestrator | ++ PS1='(venv) ' 2026-02-13 02:36:41.542235 | orchestrator | ++ export PS1 2026-02-13 02:36:41.542241 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-13 02:36:41.542247 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-13 02:36:41.542253 | orchestrator | ++ hash -r 2026-02-13 02:36:41.542268 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-13 02:36:42.497804 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-13 02:36:42.498543 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-13 02:36:42.499676 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-13 02:36:42.501145 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-13 02:36:42.502269 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-13 02:36:42.512189 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-13 02:36:42.513520 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-13 02:36:42.514637 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-13 02:36:42.515935 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-13 02:36:42.545339 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-13 02:36:42.546622 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-13 02:36:42.548452 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-13 02:36:42.549714 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-13 02:36:42.553543 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-13 02:36:42.750845 | orchestrator | ++ which gilt 2026-02-13 02:36:42.754536 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-13 02:36:42.754569 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-13 02:36:42.967595 | orchestrator | osism.cfg-generics: 2026-02-13 02:36:43.099224 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-13 02:36:43.099322 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-13 02:36:43.099577 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-13 02:36:43.099673 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-13 02:36:43.635215 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-13 02:36:43.642615 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-13 02:36:44.058427 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-13 02:36:44.102456 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-13 02:36:44.102551 | orchestrator | + deactivate 2026-02-13 02:36:44.102567 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-13 02:36:44.102580 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-13 02:36:44.102591 | orchestrator | + export PATH 2026-02-13 02:36:44.102602 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-13 02:36:44.102613 | orchestrator | + '[' -n '' ']' 2026-02-13 02:36:44.102626 | orchestrator | + hash -r 2026-02-13 02:36:44.102637 | orchestrator | + '[' -n '' ']' 2026-02-13 02:36:44.102647 | orchestrator | + unset VIRTUAL_ENV 2026-02-13 02:36:44.102658 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-13 02:36:44.102669 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-13 02:36:44.102690 | orchestrator | ~ 2026-02-13 02:36:44.102701 | orchestrator | + unset -f deactivate 2026-02-13 02:36:44.102712 | orchestrator | + popd 2026-02-13 02:36:44.104090 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-13 02:36:44.104110 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-02-13 02:36:44.105083 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-13 02:36:44.155213 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-13 02:36:44.155285 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-02-13 02:36:44.155539 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-13 02:36:44.208701 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-13 02:36:44.209462 | orchestrator | ++ semver 2024.2 2025.1 2026-02-13 02:36:44.263713 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-13 02:36:44.263815 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2026-02-13 02:36:44.345402 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-13 02:36:44.345509 | orchestrator | + source /opt/venv/bin/activate 2026-02-13 02:36:44.345523 | orchestrator | ++ deactivate nondestructive 2026-02-13 02:36:44.345542 | orchestrator | ++ '[' -n '' ']' 2026-02-13 02:36:44.345554 | orchestrator | ++ '[' -n '' ']' 2026-02-13 02:36:44.345565 | orchestrator | ++ hash -r 2026-02-13 02:36:44.345592 | orchestrator | ++ '[' -n '' ']' 2026-02-13 02:36:44.345604 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-13 02:36:44.345615 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-13 02:36:44.345640 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-13 02:36:44.345653 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-13 02:36:44.345664 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-13 02:36:44.345676 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-13 02:36:44.345687 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-13 02:36:44.345699 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-13 02:36:44.345738 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-13 02:36:44.345756 | orchestrator | ++ export PATH 2026-02-13 02:36:44.345768 | orchestrator | ++ '[' -n '' ']' 2026-02-13 02:36:44.345783 | orchestrator | ++ '[' -z '' ']' 2026-02-13 02:36:44.345794 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-13 02:36:44.345805 | orchestrator | ++ PS1='(venv) ' 2026-02-13 02:36:44.345816 | orchestrator | ++ export PS1 2026-02-13 02:36:44.345827 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-13 02:36:44.345845 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-13 02:36:44.345856 | orchestrator | ++ hash -r 2026-02-13 02:36:44.345871 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-02-13 02:36:45.327155 | orchestrator | 2026-02-13 02:36:45.328052 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-02-13 02:36:45.328086 | orchestrator | 2026-02-13 02:36:45.328099 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-13 02:36:45.871466 | orchestrator | ok: [testbed-manager] 2026-02-13 02:36:45.871573 | orchestrator | 2026-02-13 02:36:45.871590 | orchestrator | TASK [Copy fact files] ********************************************************* 
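Editor's note: two version-handling steps in the trace above are worth isolating — `set-manager-version.sh` pins `manager_version` and drops the `ceph_version`/`openstack_version` lines with `sed`, and the release gates compare versions with a `semver` helper whose `-1`/`0`/`1` result is tested via `[[ result -ge 0 ]]`. A minimal stand-alone sketch of both; the `semver_cmp` stand-in below is an assumption built on `sort -V`, not the testbed's actual helper, and the scratch file contents are illustrative:

```shell
set -e
VERSION=9.5.0

# Step 1: pin manager_version and drop the ceph/openstack pins,
# mirroring the sed calls from set-manager-version.sh in the trace.
cfg=$(mktemp)
printf 'manager_version: latest\nceph_version: reef\nopenstack_version: 2024.1\n' > "$cfg"
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$cfg"
if [ "$VERSION" != latest ]; then
    sed -i /ceph_version:/d "$cfg"
    sed -i /openstack_version:/d "$cfg"
fi

# Step 2: a sort -V based stand-in for the semver helper; it prints
# -1/0/1 so callers can gate features on "result >= 0".
semver_cmp() {
    if [ "$1" = "$2" ]; then echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then echo -1
    else echo 1
    fi
}

# 9.5.0 >= 7.0.0, so the trace appends enable_osism_kubernetes: true;
# 9.5.0 < 10.0.0-0, so the newer-only branch is skipped.
if [ "$(semver_cmp "$VERSION" 7.0.0)" -ge 0 ]; then
    echo 'enable_osism_kubernetes: true' >> "$cfg"
fi
```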
2026-02-13 02:36:46.817259 | orchestrator | changed: [testbed-manager] 2026-02-13 02:36:46.817333 | orchestrator | 2026-02-13 02:36:46.817340 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-02-13 02:36:46.817364 | orchestrator | 2026-02-13 02:36:46.817369 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-13 02:36:48.929820 | orchestrator | ok: [testbed-manager] 2026-02-13 02:36:48.929903 | orchestrator | 2026-02-13 02:36:48.929913 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-02-13 02:36:48.984108 | orchestrator | ok: [testbed-manager] 2026-02-13 02:36:48.984211 | orchestrator | 2026-02-13 02:36:48.984229 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-02-13 02:36:49.429481 | orchestrator | changed: [testbed-manager] 2026-02-13 02:36:49.429584 | orchestrator | 2026-02-13 02:36:49.429604 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-02-13 02:36:49.470458 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:36:49.470585 | orchestrator | 2026-02-13 02:36:49.470611 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-13 02:36:49.808477 | orchestrator | changed: [testbed-manager] 2026-02-13 02:36:49.808580 | orchestrator | 2026-02-13 02:36:49.808596 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-02-13 02:36:50.139678 | orchestrator | ok: [testbed-manager] 2026-02-13 02:36:50.139778 | orchestrator | 2026-02-13 02:36:50.139795 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-02-13 02:36:50.241785 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:36:50.241879 | orchestrator | 2026-02-13 02:36:50.241894 | orchestrator | PLAY 
[Apply role traefik] ****************************************************** 2026-02-13 02:36:50.241906 | orchestrator | 2026-02-13 02:36:50.241917 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-13 02:36:51.878907 | orchestrator | ok: [testbed-manager] 2026-02-13 02:36:51.879120 | orchestrator | 2026-02-13 02:36:51.879140 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-02-13 02:36:51.995596 | orchestrator | included: osism.services.traefik for testbed-manager 2026-02-13 02:36:51.995669 | orchestrator | 2026-02-13 02:36:51.995679 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-02-13 02:36:52.055393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-02-13 02:36:52.055481 | orchestrator | 2026-02-13 02:36:52.055494 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-02-13 02:36:53.108631 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-02-13 02:36:53.108723 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2026-02-13 02:36:53.108738 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-02-13 02:36:53.108751 | orchestrator | 2026-02-13 02:36:53.108766 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-02-13 02:36:54.853574 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-02-13 02:36:54.853680 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-02-13 02:36:54.853697 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-02-13 02:36:54.853710 | orchestrator | 2026-02-13 02:36:54.853722 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] 
******************** 2026-02-13 02:36:55.456495 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-13 02:36:55.456596 | orchestrator | changed: [testbed-manager] 2026-02-13 02:36:55.456613 | orchestrator | 2026-02-13 02:36:55.456626 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-02-13 02:36:56.070250 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-13 02:36:56.070371 | orchestrator | changed: [testbed-manager] 2026-02-13 02:36:56.070388 | orchestrator | 2026-02-13 02:36:56.071230 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-02-13 02:36:56.124804 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:36:56.124882 | orchestrator | 2026-02-13 02:36:56.124895 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-02-13 02:36:56.486792 | orchestrator | ok: [testbed-manager] 2026-02-13 02:36:56.486892 | orchestrator | 2026-02-13 02:36:56.486908 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-02-13 02:36:56.560012 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-02-13 02:36:56.560183 | orchestrator | 2026-02-13 02:36:56.560203 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-02-13 02:36:57.544555 | orchestrator | changed: [testbed-manager] 2026-02-13 02:36:57.544656 | orchestrator | 2026-02-13 02:36:57.544674 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-02-13 02:36:58.295682 | orchestrator | changed: [testbed-manager] 2026-02-13 02:36:58.295790 | orchestrator | 2026-02-13 02:36:58.295806 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-02-13 02:37:08.054340 | 
orchestrator | changed: [testbed-manager] 2026-02-13 02:37:08.054450 | orchestrator | 2026-02-13 02:37:08.054466 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-02-13 02:37:08.117170 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:37:08.117351 | orchestrator | 2026-02-13 02:37:08.117394 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-02-13 02:37:08.117408 | orchestrator | 2026-02-13 02:37:08.117420 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-13 02:37:09.979231 | orchestrator | ok: [testbed-manager] 2026-02-13 02:37:09.979338 | orchestrator | 2026-02-13 02:37:09.979354 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-02-13 02:37:10.093734 | orchestrator | included: osism.services.manager for testbed-manager 2026-02-13 02:37:10.093836 | orchestrator | 2026-02-13 02:37:10.093851 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-13 02:37:10.152493 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-13 02:37:10.152587 | orchestrator | 2026-02-13 02:37:10.152603 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-13 02:37:12.589802 | orchestrator | ok: [testbed-manager] 2026-02-13 02:37:12.589920 | orchestrator | 2026-02-13 02:37:12.589936 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-02-13 02:37:12.647671 | orchestrator | ok: [testbed-manager] 2026-02-13 02:37:12.647767 | orchestrator | 2026-02-13 02:37:12.647783 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-13 02:37:12.790763 | orchestrator | 
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-13 02:37:12.790860 | orchestrator | 2026-02-13 02:37:12.790876 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-13 02:37:15.632929 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-02-13 02:37:15.633032 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-02-13 02:37:15.633047 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-13 02:37:15.633059 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-02-13 02:37:15.633071 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-13 02:37:15.633112 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-13 02:37:15.633123 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-13 02:37:15.633134 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-02-13 02:37:15.633145 | orchestrator | 2026-02-13 02:37:15.633158 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-13 02:37:16.289151 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:16.289252 | orchestrator | 2026-02-13 02:37:16.289271 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-13 02:37:16.918560 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:16.918662 | orchestrator | 2026-02-13 02:37:16.918679 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-02-13 02:37:16.999673 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-13 02:37:16.999769 | orchestrator | 2026-02-13 02:37:16.999785 | orchestrator | TASK 
[osism.services.manager : Copy ARA environment files] ********************* 2026-02-13 02:37:18.243763 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-02-13 02:37:18.243875 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-02-13 02:37:18.243892 | orchestrator | 2026-02-13 02:37:18.243912 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-13 02:37:18.849731 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:18.849830 | orchestrator | 2026-02-13 02:37:18.849847 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-13 02:37:18.908679 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:37:18.908774 | orchestrator | 2026-02-13 02:37:18.908789 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-13 02:37:18.980882 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-13 02:37:18.980977 | orchestrator | 2026-02-13 02:37:18.980993 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-13 02:37:19.587646 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:19.587732 | orchestrator | 2026-02-13 02:37:19.587747 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-13 02:37:19.660584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-13 02:37:19.660661 | orchestrator | 2026-02-13 02:37:19.660675 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-13 02:37:21.073561 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-13 02:37:21.073645 | orchestrator | changed: [testbed-manager] => 
(item=None) 2026-02-13 02:37:21.073661 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:21.073675 | orchestrator | 2026-02-13 02:37:21.073686 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-13 02:37:21.706626 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:21.706716 | orchestrator | 2026-02-13 02:37:21.706732 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-13 02:37:21.758648 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:37:21.758738 | orchestrator | 2026-02-13 02:37:21.758755 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-13 02:37:21.860612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-13 02:37:21.860705 | orchestrator | 2026-02-13 02:37:21.860719 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-13 02:37:22.373348 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:22.373395 | orchestrator | 2026-02-13 02:37:22.373402 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-13 02:37:22.742976 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:22.743147 | orchestrator | 2026-02-13 02:37:22.743177 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-13 02:37:23.935681 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-02-13 02:37:23.935794 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-02-13 02:37:23.935811 | orchestrator | 2026-02-13 02:37:23.935825 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-13 02:37:24.554834 | orchestrator | changed: [testbed-manager] 2026-02-13 
02:37:24.554937 | orchestrator | 2026-02-13 02:37:24.554953 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-13 02:37:24.913909 | orchestrator | ok: [testbed-manager] 2026-02-13 02:37:24.914136 | orchestrator | 2026-02-13 02:37:24.914162 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-13 02:37:25.271669 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:25.271775 | orchestrator | 2026-02-13 02:37:25.271791 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-13 02:37:25.321656 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:37:25.321756 | orchestrator | 2026-02-13 02:37:25.321772 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-13 02:37:25.398380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-13 02:37:25.398508 | orchestrator | 2026-02-13 02:37:25.398524 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-13 02:37:25.435062 | orchestrator | ok: [testbed-manager] 2026-02-13 02:37:25.435172 | orchestrator | 2026-02-13 02:37:25.435186 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-13 02:37:27.451349 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-02-13 02:37:27.451450 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-02-13 02:37:27.451467 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-02-13 02:37:27.451480 | orchestrator | 2026-02-13 02:37:27.451493 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-13 02:37:28.150663 | orchestrator | changed: [testbed-manager] 2026-02-13 
02:37:28.150783 | orchestrator | 2026-02-13 02:37:28.150800 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-13 02:37:28.863523 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:28.863598 | orchestrator | 2026-02-13 02:37:28.863607 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-13 02:37:29.546465 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:29.546570 | orchestrator | 2026-02-13 02:37:29.546588 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-13 02:37:29.627665 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-13 02:37:29.627787 | orchestrator | 2026-02-13 02:37:29.627815 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-13 02:37:29.671039 | orchestrator | ok: [testbed-manager] 2026-02-13 02:37:29.671156 | orchestrator | 2026-02-13 02:37:29.671171 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-13 02:37:30.401551 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-02-13 02:37:30.401671 | orchestrator | 2026-02-13 02:37:30.401696 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-13 02:37:30.493871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-13 02:37:30.493964 | orchestrator | 2026-02-13 02:37:30.493980 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-13 02:37:31.202706 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:31.202807 | orchestrator | 2026-02-13 02:37:31.202825 | orchestrator | TASK 
[osism.services.manager : Create traefik external network] **************** 2026-02-13 02:37:31.800322 | orchestrator | ok: [testbed-manager] 2026-02-13 02:37:31.800424 | orchestrator | 2026-02-13 02:37:31.800441 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-13 02:37:31.858945 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:37:31.859044 | orchestrator | 2026-02-13 02:37:31.859062 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-13 02:37:31.920308 | orchestrator | ok: [testbed-manager] 2026-02-13 02:37:31.920402 | orchestrator | 2026-02-13 02:37:31.920417 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-13 02:37:32.719845 | orchestrator | changed: [testbed-manager] 2026-02-13 02:37:32.719949 | orchestrator | 2026-02-13 02:37:32.719967 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-13 02:38:45.141917 | orchestrator | changed: [testbed-manager] 2026-02-13 02:38:45.142129 | orchestrator | 2026-02-13 02:38:45.142162 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-13 02:38:46.111661 | orchestrator | ok: [testbed-manager] 2026-02-13 02:38:46.111762 | orchestrator | 2026-02-13 02:38:46.111778 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-13 02:38:46.172133 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:38:46.172226 | orchestrator | 2026-02-13 02:38:46.172241 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-13 02:38:53.625560 | orchestrator | changed: [testbed-manager] 2026-02-13 02:38:53.625652 | orchestrator | 2026-02-13 02:38:53.625664 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 
2026-02-13 02:38:53.675263 | orchestrator | ok: [testbed-manager] 2026-02-13 02:38:53.675429 | orchestrator | 2026-02-13 02:38:53.675448 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-13 02:38:53.675462 | orchestrator | 2026-02-13 02:38:53.675474 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-13 02:38:53.834676 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:38:53.834780 | orchestrator | 2026-02-13 02:38:53.834796 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-13 02:39:53.893855 | orchestrator | Pausing for 60 seconds 2026-02-13 02:39:53.893962 | orchestrator | changed: [testbed-manager] 2026-02-13 02:39:53.893978 | orchestrator | 2026-02-13 02:39:53.893991 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-13 02:39:56.870186 | orchestrator | changed: [testbed-manager] 2026-02-13 02:39:56.870381 | orchestrator | 2026-02-13 02:39:56.870405 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-13 02:40:38.368951 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-13 02:40:38.369053 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
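Editor's note: the "Wait for an healthy manager service" handler above polls with up to 50 retries (two failures here, then success). A hedged sketch of that polling pattern — the container name, interval, and the use of `docker inspect` are assumptions for illustration, not taken from the role:

```shell
# Poll a container's Docker healthcheck status until it reports
# "healthy", roughly as the Ansible handler retries above.
wait_healthy() {
    name=$1; retries=${2:-50}; delay=${3:-5}
    i=0
    while [ "$i" -lt "$retries" ]; do
        status=$(docker inspect --format '{{.State.Health.Status}}' "$name" 2>/dev/null || true)
        if [ "$status" = healthy ]; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    echo "container $name never became healthy" >&2
    return 1
}
```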
2026-02-13 02:40:38.369064 | orchestrator | changed: [testbed-manager] 2026-02-13 02:40:38.369074 | orchestrator | 2026-02-13 02:40:38.369102 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-13 02:40:48.451548 | orchestrator | changed: [testbed-manager] 2026-02-13 02:40:48.451646 | orchestrator | 2026-02-13 02:40:48.451727 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-13 02:40:48.570660 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-13 02:40:48.570813 | orchestrator | 2026-02-13 02:40:48.570827 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-13 02:40:48.570837 | orchestrator | 2026-02-13 02:40:48.570847 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-13 02:40:48.623746 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:40:48.623827 | orchestrator | 2026-02-13 02:40:48.623840 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-13 02:40:48.692261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-13 02:40:48.692367 | orchestrator | 2026-02-13 02:40:48.692391 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-13 02:40:49.481758 | orchestrator | changed: [testbed-manager] 2026-02-13 02:40:49.481885 | orchestrator | 2026-02-13 02:40:49.481915 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-13 02:40:52.742628 | orchestrator | ok: [testbed-manager] 2026-02-13 02:40:52.742793 | orchestrator | 2026-02-13 02:40:52.742812 | orchestrator | TASK 
[osism.services.manager : Display version check results] ****************** 2026-02-13 02:40:52.801553 | orchestrator | ok: [testbed-manager] => { 2026-02-13 02:40:52.801650 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-13 02:40:52.801664 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-13 02:40:52.801704 | orchestrator | "Checking running containers against expected versions...", 2026-02-13 02:40:52.801728 | orchestrator | "", 2026-02-13 02:40:52.801748 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-13 02:40:52.801768 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-13 02:40:52.801788 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.801805 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0", 2026-02-13 02:40:52.801816 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.801827 | orchestrator | "", 2026-02-13 02:40:52.801838 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-13 02:40:52.801850 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-13 02:40:52.801861 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.801903 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0", 2026-02-13 02:40:52.801915 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.801992 | orchestrator | "", 2026-02-13 02:40:52.802005 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-13 02:40:52.802077 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-13 02:40:52.802090 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802104 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0", 2026-02-13 02:40:52.802117 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802129 | orchestrator | 
"", 2026-02-13 02:40:52.802142 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-13 02:40:52.802155 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-13 02:40:52.802167 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802180 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0", 2026-02-13 02:40:52.802192 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802205 | orchestrator | "", 2026-02-13 02:40:52.802217 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-13 02:40:52.802233 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-13 02:40:52.802246 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802258 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0", 2026-02-13 02:40:52.802271 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802283 | orchestrator | "", 2026-02-13 02:40:52.802296 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-13 02:40:52.802310 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802323 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802335 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802349 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802361 | orchestrator | "", 2026-02-13 02:40:52.802373 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-13 02:40:52.802385 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-13 02:40:52.802398 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802411 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-13 02:40:52.802424 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802437 | orchestrator | "", 2026-02-13 02:40:52.802449 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-13 02:40:52.802462 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-13 02:40:52.802472 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802483 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-13 02:40:52.802494 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802505 | orchestrator | "", 2026-02-13 02:40:52.802515 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-13 02:40:52.802527 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-13 02:40:52.802537 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802548 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-13 02:40:52.802559 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802570 | orchestrator | "", 2026-02-13 02:40:52.802580 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-13 02:40:52.802591 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-13 02:40:52.802602 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802613 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-13 02:40:52.802623 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802634 | orchestrator | "", 2026-02-13 02:40:52.802645 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-13 02:40:52.802656 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802667 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802724 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802737 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802748 | orchestrator | "", 2026-02-13 02:40:52.802758 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-13 02:40:52.802769 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802780 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802791 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802801 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802812 | orchestrator | "", 2026-02-13 02:40:52.802824 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-13 02:40:52.802835 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802845 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802856 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802867 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802877 | orchestrator | "", 2026-02-13 02:40:52.802888 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-13 02:40:52.802899 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802909 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.802920 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802950 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.802962 | orchestrator | "", 2026-02-13 02:40:52.802972 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-13 02:40:52.802983 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.802994 | orchestrator | " Enabled: true", 2026-02-13 02:40:52.803015 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-13 02:40:52.803027 | orchestrator | " Status: ✅ MATCH", 2026-02-13 02:40:52.803038 | orchestrator | "", 2026-02-13 02:40:52.803048 | orchestrator | "=== Summary ===", 2026-02-13 02:40:52.803059 | orchestrator | "Errors (version mismatches): 0", 2026-02-13 02:40:52.803070 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-13 02:40:52.803081 | orchestrator | "", 2026-02-13 02:40:52.803092 | orchestrator | "✅ All running containers match expected versions!" 2026-02-13 02:40:52.803103 | orchestrator | ] 2026-02-13 02:40:52.803114 | orchestrator | } 2026-02-13 02:40:52.803126 | orchestrator | 2026-02-13 02:40:52.803137 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-13 02:40:52.858591 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:40:52.858744 | orchestrator | 2026-02-13 02:40:52.858770 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:40:52.858791 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-13 02:40:52.858810 | orchestrator | 2026-02-13 02:40:52.956499 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-13 02:40:52.956602 | orchestrator | + deactivate 2026-02-13 02:40:52.956619 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-13 02:40:52.956633 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-13 02:40:52.956644 | orchestrator | + export PATH 2026-02-13 02:40:52.956656 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-13 02:40:52.956667 | orchestrator | + '[' -n '' ']' 2026-02-13 02:40:52.956730 | orchestrator | + hash -r 2026-02-13 02:40:52.956742 | orchestrator | + '[' -n '' ']' 2026-02-13 02:40:52.956752 | orchestrator | + unset VIRTUAL_ENV 2026-02-13 02:40:52.956763 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-13 02:40:52.956774 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-13 02:40:52.956785 | orchestrator | + unset -f deactivate 2026-02-13 02:40:52.956797 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-13 02:40:52.963388 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-13 02:40:52.963467 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-13 02:40:52.963490 | orchestrator | + local max_attempts=60 2026-02-13 02:40:52.963509 | orchestrator | + local name=ceph-ansible 2026-02-13 02:40:52.963560 | orchestrator | + local attempt_num=1 2026-02-13 02:40:52.964192 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:40:53.002118 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-13 02:40:53.002207 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-13 02:40:53.002221 | orchestrator | + local max_attempts=60 2026-02-13 02:40:53.002233 | orchestrator | + local name=kolla-ansible 2026-02-13 02:40:53.002244 | orchestrator | + local attempt_num=1 2026-02-13 02:40:53.002461 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-13 02:40:53.044800 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-13 02:40:53.044894 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-13 02:40:53.044909 | orchestrator | + local max_attempts=60 2026-02-13 02:40:53.044921 | orchestrator | + local name=osism-ansible 2026-02-13 02:40:53.044932 | orchestrator | + local attempt_num=1 2026-02-13 02:40:53.046152 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-13 02:40:53.085635 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-13 02:40:53.085765 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-13 02:40:53.085779 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-13 02:40:53.814355 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-13 02:40:54.005740 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-13 02:40:54.005846 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-13 02:40:54.005864 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-13 02:40:54.005876 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-02-13 02:40:54.005889 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up About a minute (healthy) 8000/tcp 2026-02-13 02:40:54.005901 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up About a minute (healthy) 2026-02-13 02:40:54.005935 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up About a minute (healthy) 2026-02-13 02:40:54.006226 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up 57 seconds (healthy) 2026-02-13 02:40:54.006326 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up About a minute (healthy) 2026-02-13 02:40:54.006341 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up About a minute (healthy) 3306/tcp 2026-02-13 02:40:54.006354 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes 
ago Up About a minute (healthy) 2026-02-13 02:40:54.006366 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up About a minute (healthy) 6379/tcp 2026-02-13 02:40:54.006377 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-13 02:40:54.006418 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-02-13 02:40:54.006432 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-13 02:40:54.006443 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up About a minute (healthy) 2026-02-13 02:40:54.011534 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-13 02:40:54.059044 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-13 02:40:54.059160 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-13 02:40:54.063858 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-13 02:41:06.350399 | orchestrator | 2026-02-13 02:41:06 | INFO  | Task e3724a45-174f-44ff-941c-11d135170cc0 (resolvconf) was prepared for execution. 2026-02-13 02:41:06.350550 | orchestrator | 2026-02-13 02:41:06 | INFO  | It takes a moment until task e3724a45-174f-44ff-941c-11d135170cc0 (resolvconf) has been started and output is visible here. 
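The `wait_for_container_healthy` helper visible in the `set -x` trace above (the `+` lines) can be reconstructed roughly as follows. This is a hedged sketch inferred from the trace, not the testbed's actual script: the sleep interval and the failure message are assumptions, while the argument names (`max_attempts`, `name`, `attempt_num`) and the `docker inspect` health probe are taken verbatim from the trace.

```shell
#!/usr/bin/env bash
# Sketch of the traced wait_for_container_healthy helper: poll the Docker
# health status of a container until it reports "healthy" or the attempt
# budget runs out. Sleep interval and error text are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container $name did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5  # assumed poll interval; not visible in the trace
    done
}
```

Used in the trace as e.g. `wait_for_container_healthy 60 ceph-ansible`, which returns immediately when `docker inspect` already reports `healthy`.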
2026-02-13 02:41:19.220501 | orchestrator | 2026-02-13 02:41:19.220648 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-13 02:41:19.220677 | orchestrator | 2026-02-13 02:41:19.220697 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-13 02:41:19.220710 | orchestrator | Friday 13 February 2026 02:41:09 +0000 (0:00:00.104) 0:00:00.104 ******* 2026-02-13 02:41:19.220722 | orchestrator | ok: [testbed-manager] 2026-02-13 02:41:19.220733 | orchestrator | 2026-02-13 02:41:19.220805 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-13 02:41:19.220829 | orchestrator | Friday 13 February 2026 02:41:13 +0000 (0:00:03.436) 0:00:03.540 ******* 2026-02-13 02:41:19.220849 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:41:19.220869 | orchestrator | 2026-02-13 02:41:19.220888 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-13 02:41:19.220909 | orchestrator | Friday 13 February 2026 02:41:13 +0000 (0:00:00.069) 0:00:03.610 ******* 2026-02-13 02:41:19.220929 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-13 02:41:19.220943 | orchestrator | 2026-02-13 02:41:19.220955 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-13 02:41:19.220966 | orchestrator | Friday 13 February 2026 02:41:13 +0000 (0:00:00.082) 0:00:03.692 ******* 2026-02-13 02:41:19.220977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-13 02:41:19.220988 | orchestrator | 2026-02-13 02:41:19.220999 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-13 02:41:19.221029 | orchestrator | Friday 13 February 2026 02:41:13 +0000 (0:00:00.081) 0:00:03.773 ******* 2026-02-13 02:41:19.221043 | orchestrator | ok: [testbed-manager] 2026-02-13 02:41:19.221056 | orchestrator | 2026-02-13 02:41:19.221068 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-13 02:41:19.221081 | orchestrator | Friday 13 February 2026 02:41:14 +0000 (0:00:01.025) 0:00:04.799 ******* 2026-02-13 02:41:19.221094 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:41:19.221106 | orchestrator | 2026-02-13 02:41:19.221118 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-13 02:41:19.221131 | orchestrator | Friday 13 February 2026 02:41:14 +0000 (0:00:00.063) 0:00:04.862 ******* 2026-02-13 02:41:19.221143 | orchestrator | ok: [testbed-manager] 2026-02-13 02:41:19.221155 | orchestrator | 2026-02-13 02:41:19.221167 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-13 02:41:19.221205 | orchestrator | Friday 13 February 2026 02:41:15 +0000 (0:00:00.490) 0:00:05.353 ******* 2026-02-13 02:41:19.221218 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:41:19.221230 | orchestrator | 2026-02-13 02:41:19.221243 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-13 02:41:19.221256 | orchestrator | Friday 13 February 2026 02:41:15 +0000 (0:00:00.079) 0:00:05.433 ******* 2026-02-13 02:41:19.221268 | orchestrator | changed: [testbed-manager] 2026-02-13 02:41:19.221280 | orchestrator | 2026-02-13 02:41:19.221293 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-13 02:41:19.221305 | orchestrator | Friday 13 February 2026 02:41:15 +0000 (0:00:00.522) 0:00:05.955 ******* 2026-02-13 02:41:19.221318 | orchestrator | changed: 
[testbed-manager] 2026-02-13 02:41:19.221329 | orchestrator | 2026-02-13 02:41:19.221341 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-13 02:41:19.221353 | orchestrator | Friday 13 February 2026 02:41:16 +0000 (0:00:01.041) 0:00:06.996 ******* 2026-02-13 02:41:19.221365 | orchestrator | ok: [testbed-manager] 2026-02-13 02:41:19.221378 | orchestrator | 2026-02-13 02:41:19.221391 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-13 02:41:19.221403 | orchestrator | Friday 13 February 2026 02:41:17 +0000 (0:00:00.947) 0:00:07.943 ******* 2026-02-13 02:41:19.221416 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-13 02:41:19.221431 | orchestrator | 2026-02-13 02:41:19.221450 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-13 02:41:19.221470 | orchestrator | Friday 13 February 2026 02:41:17 +0000 (0:00:00.083) 0:00:08.027 ******* 2026-02-13 02:41:19.221489 | orchestrator | changed: [testbed-manager] 2026-02-13 02:41:19.221507 | orchestrator | 2026-02-13 02:41:19.221519 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:41:19.221530 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 02:41:19.221541 | orchestrator | 2026-02-13 02:41:19.221551 | orchestrator | 2026-02-13 02:41:19.221570 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 02:41:19.221589 | orchestrator | Friday 13 February 2026 02:41:18 +0000 (0:00:01.150) 0:00:09.178 ******* 2026-02-13 02:41:19.221607 | orchestrator | =============================================================================== 2026-02-13 02:41:19.221626 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.44s 2026-02-13 02:41:19.221640 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.15s 2026-02-13 02:41:19.221651 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2026-02-13 02:41:19.221661 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.03s 2026-02-13 02:41:19.221672 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.95s 2026-02-13 02:41:19.221682 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2026-02-13 02:41:19.221794 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2026-02-13 02:41:19.221810 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2026-02-13 02:41:19.221821 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-02-13 02:41:19.221832 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2026-02-13 02:41:19.221843 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-02-13 02:41:19.221853 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-02-13 02:41:19.221864 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-02-13 02:41:19.524311 | orchestrator | + osism apply sshconfig 2026-02-13 02:41:31.603961 | orchestrator | 2026-02-13 02:41:31 | INFO  | Task 03b3c9a3-f51c-4fca-ab7a-68bf568f8dc6 (sshconfig) was prepared for execution. 
2026-02-13 02:41:31.604099 | orchestrator | 2026-02-13 02:41:31 | INFO  | It takes a moment until task 03b3c9a3-f51c-4fca-ab7a-68bf568f8dc6 (sshconfig) has been started and output is visible here. 2026-02-13 02:41:42.624129 | orchestrator | 2026-02-13 02:41:42.624248 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-13 02:41:42.624266 | orchestrator | 2026-02-13 02:41:42.624278 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-13 02:41:42.624290 | orchestrator | Friday 13 February 2026 02:41:35 +0000 (0:00:00.116) 0:00:00.116 ******* 2026-02-13 02:41:42.624301 | orchestrator | ok: [testbed-manager] 2026-02-13 02:41:42.624313 | orchestrator | 2026-02-13 02:41:42.624344 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-13 02:41:42.624357 | orchestrator | Friday 13 February 2026 02:41:36 +0000 (0:00:00.490) 0:00:00.607 ******* 2026-02-13 02:41:42.624367 | orchestrator | changed: [testbed-manager] 2026-02-13 02:41:42.624379 | orchestrator | 2026-02-13 02:41:42.624390 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-13 02:41:42.624401 | orchestrator | Friday 13 February 2026 02:41:36 +0000 (0:00:00.449) 0:00:01.057 ******* 2026-02-13 02:41:42.624412 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-13 02:41:42.624424 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-13 02:41:42.624435 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-13 02:41:42.624446 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-13 02:41:42.624457 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-13 02:41:42.624468 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-13 02:41:42.624478 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-13 02:41:42.624489 | orchestrator | 2026-02-13 02:41:42.624500 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-13 02:41:42.624511 | orchestrator | Friday 13 February 2026 02:41:41 +0000 (0:00:05.279) 0:00:06.337 ******* 2026-02-13 02:41:42.624522 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:41:42.624533 | orchestrator | 2026-02-13 02:41:42.624543 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-13 02:41:42.624554 | orchestrator | Friday 13 February 2026 02:41:41 +0000 (0:00:00.074) 0:00:06.411 ******* 2026-02-13 02:41:42.624565 | orchestrator | changed: [testbed-manager] 2026-02-13 02:41:42.624576 | orchestrator | 2026-02-13 02:41:42.624587 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:41:42.624599 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 02:41:42.624611 | orchestrator | 2026-02-13 02:41:42.624622 | orchestrator | 2026-02-13 02:41:42.624632 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 02:41:42.624643 | orchestrator | Friday 13 February 2026 02:41:42 +0000 (0:00:00.550) 0:00:06.961 ******* 2026-02-13 02:41:42.624654 | orchestrator | =============================================================================== 2026-02-13 02:41:42.624665 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.28s 2026-02-13 02:41:42.624678 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s 2026-02-13 02:41:42.624690 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.49s 2026-02-13 02:41:42.624703 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.45s 2026-02-13 02:41:42.624716 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2026-02-13 02:41:42.915080 | orchestrator | + osism apply known-hosts 2026-02-13 02:41:54.851440 | orchestrator | 2026-02-13 02:41:54 | INFO  | Task 80f070e5-6522-45aa-9492-bba72524125b (known-hosts) was prepared for execution. 2026-02-13 02:41:54.851553 | orchestrator | 2026-02-13 02:41:54 | INFO  | It takes a moment until task 80f070e5-6522-45aa-9492-bba72524125b (known-hosts) has been started and output is visible here. 2026-02-13 02:42:11.312715 | orchestrator | 2026-02-13 02:42:11.312826 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-13 02:42:11.312842 | orchestrator | 2026-02-13 02:42:11.312855 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-13 02:42:11.312868 | orchestrator | Friday 13 February 2026 02:41:58 +0000 (0:00:00.156) 0:00:00.156 ******* 2026-02-13 02:42:11.312908 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-13 02:42:11.312922 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-13 02:42:11.312933 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-13 02:42:11.312944 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-13 02:42:11.312955 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-13 02:42:11.312966 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-13 02:42:11.312977 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-13 02:42:11.312988 | orchestrator | 2026-02-13 02:42:11.312999 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-13 02:42:11.313011 | orchestrator | Friday 13 February 2026 02:42:04 +0000 (0:00:05.742) 0:00:05.899 ******* 2026-02-13 
02:42:11.313023 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-02-13 02:42:11.313036 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-02-13 02:42:11.313047 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-02-13 02:42:11.313058 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-02-13 02:42:11.313069 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-02-13 02:42:11.313080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-02-13 02:42:11.313102 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-02-13 02:42:11.313113 | orchestrator |
2026-02-13 02:42:11.313124 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:11.313136 | orchestrator | Friday 13 February 2026 02:42:04 +0000 (0:00:00.167) 0:00:06.066 *******
2026-02-13 02:42:11.313148 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPGjMsrH4nq1zLeIGs6nslIaeT4g+n/koan0t8KXEZrNizrMNLovpTf72IPU1FJesKx6wv3mlavAdZscpArDs6Y=)
2026-02-13 02:42:11.313168 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD4r679HCbyd3ftzr0hbGaXIy43bRvaagZmQTiGxUn5DBgpOU7mG/1ZtTSf6p2umRgpocQvESvE4aB4/qo359DZcMsyEABmZLOBO0JvoIWoZqIxZmoe4dH/p3OT8SWY5REFdZrNlfAKojKyeW/tyGG3A8vzWcmCHqmKOhYVOCWOqjwi/wnRl15nWjdk124H7NgKcmCxsxe2ipp0qRMOBqwXQAKALyMJ/DGNbAdCuPKV+M19FCazoROqXgBGTsq7/H8qyK7Lpw0LjzcbTxropzj5Fzuzq3FdIuvOFIxtaJAv2/+f82NF3S+W7sGQuULOea6KCiV3ZyVBmtN4I93pGL7WzFWToHxxcw2UVsy7E66YJvT5NO8tawrvqZ/cK1X1wPjhLbreT2PYDeYttH9ALJiKQ60bbZ5cF7mxNZq5ammn2lw+/CeBz5IMFB2Zif6s9NX+y3scKJAi1ZsOycjT3WUMBVm1wnkZtjTsLREVFlhTkIGrOrpIAHSTubVv4x/E1Nk=)
2026-02-13 02:42:11.313206 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJJKc1LeEAJN1+Q+2kctfYYrk33j3tS4n0C992waQMs0)
2026-02-13 02:42:11.313219 | orchestrator |
2026-02-13 02:42:11.313230 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:11.313242 | orchestrator | Friday 13 February 2026 02:42:06 +0000 (0:00:01.222) 0:00:07.288 *******
2026-02-13 02:42:11.313255 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFsgvutxP6m4MBMoGI8Lp3sOUj6Mk894huP1d0OEPE3+)
2026-02-13 02:42:11.313300 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqq2IAdsOyqnYq+TxDx1n+kX/XYyHLJF0TNxsba9wPpyYhfH4kSA5+NgCQpoPYij0qVt6LCs80Jei9mcEvmYHvTIc3D7IKe8elwImYgv+3s2VhRbtn3W8xM8LptPhJBCnl/c4tH0r2HM0ZKUu5DHLRP2TmjdNrEpu3nsEHdWIZBbnC+CZ6+vzni1DkjX7RJTIXiVrjMfeH5+A/NvdxY49zKrTzsd0HMTAd9n4uARDuIYdIzFsmir0WcFJS5MovnBC4VJi7kryt5slOFYEczyQnTHsQrDIkVUYfgkDTMXgYyj8tDSVs4WRZRG484aP8X+AfHjS8tnfojNzJVVT3+pTdthKEITwSoeU1tXckvMsQWCZ0pXujin/Hq8fPYS2P4dKsd/Og1l8ONQmyg81Lf2fGlpStEzbBMWl6d6m9jWc3Q4l/HbZ6K545+0VprProaelEOV6tHbtyThPq/iTwVJvdarxeleFhgcODuTetyX6BLus+xnKAIWIRjh8PmlqM+Fs=)
2026-02-13 02:42:11.313314 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP3zuaoQlkJTXqfpi7F4iSwsomQg+QSJ+XGPIR88gWQ5x9i1jsBwWiqT6J535Z7mrMa7fw/KNddnmFN9Gsb4aj0=)
2026-02-13 02:42:11.313326 | orchestrator |
2026-02-13 02:42:11.313338 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:11.313350 | orchestrator | Friday 13 February 2026 02:42:07 +0000 (0:00:01.060) 0:00:08.349 *******
2026-02-13 02:42:11.313363 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE+jhZ8MiPIBxT5vxliKQuxfW8zyfJXtg5JuJ2P4dvskWJyhzs1ehfYcOO8HIOzLpuB/8TRoWNbEudP5g7fXr+Y=)
2026-02-13 02:42:11.313376 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIullzqz5EbudNRTwnxbSr78UiJB0JM7MM4ga1cMpFtI)
2026-02-13 02:42:11.313388 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvSiAYM6d8pTrkyZdItiv7kETrTz6i1RjNiJpzG1suoHtukoX04AEGjKVU5WY1pgiWT3ZfPNCpQ8XRlJSjnuKkgx2+iTc3QkS+cro5GIsDxGhjDMA+4/aR4tPecnmgs0LBWV9yVmDbcQ/+KVnJUR1EPqTfY/y5+444s16CDBMqVlAcadeLnR8YXD21CxQAxcQ4qoQ/bPCZv7xStHVXOfEIocDgGFrR0Dq9oZ27mnsZ0UWGbYUDrQeVhwGL/P5g4USTZp8BNuegRenulS0faWepsr3scJLIRZNGIzdzjoXamwibluGmdytsNR7dlIRGDUGqKjdyZXrW+vyzVAtt9Hci7y37nKoR2Z/7Q5z4Tf5xiMiuR6Jgeewhk0dflm9ueKtvsgOj5Q+BW/A9tNze2KpmvomFlUKivsB/Lr01XDqrDw77kSiK99nRlztF1A6wrbO+LEL6jc9/sbOW/YhoMXylza1Uco+KcVF2IZJ9EDTVc8qnIXRNpC1sSOjm8WFP3AU=)
2026-02-13 02:42:11.313400 | orchestrator |
2026-02-13 02:42:11.313411 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:11.313422 | orchestrator | Friday 13 February 2026 02:42:08 +0000 (0:00:01.031) 0:00:09.381 *******
2026-02-13 02:42:11.313433 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbRTOKf7NTeNs9wOdKKeM88Ggas2Y6YATnjV7kXOFZZbBNw52bdbug62x+f21cFocecXzW4k8+ZmVqTXvjqoWE2NICzC7lo1tMhNw0BYW54F4yX8+O3rN0EKwKMVsJRiGdIYLzvgn2lnBhdxFrI/Ktirr1r+bTT/LVja78p+KDxenzj7kUPony6wuQMoR7I+2FN4tjStec7EljgkWMSGyAXRjDJJtfI09QItu9kdosC+UznKLveLIvU2xLdHhF0gYYQ6iF7nWjzagDdkTrthhwuV4BpW6lX7lpgY2aPh0yrt68adsMpLBlj3TnA6hfdA0740fo8/Ssn1wRepCsq2VFPQ0EOpqFosZ5jGj7ncJpEApNVF+a3rq6kAwMX5p54Or3kbRDt+AxuZfTccWVcSOPh+W5hChrWuxM+NuTfUrrjP1K8vJ3djKqBxEy5FVtx23TUH5Lr4TUggyWL1rzLfhWLswdXfpOmYrXZCF6TMBUFllerrhuR6cXzy4tpwErV/c=)
2026-02-13 02:42:11.313444 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPFLfOGdv3jZ9QJXhOH7mdwxIra59q0BL9Z+XpTy0YPI0GnPfY5dqTuscWEA4AymQjgg2b9qcX5yFSIpas5DDZc=)
2026-02-13 02:42:11.313463 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID7RfKe7PODIUXjfeSInT53+G/mqam4C8xb0Tz0s6GQJ)
2026-02-13 02:42:11.313474 | orchestrator |
2026-02-13 02:42:11.313485 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:11.313496 | orchestrator | Friday 13 February 2026 02:42:09 +0000 (0:00:01.032) 0:00:10.413 *******
2026-02-13 02:42:11.313507 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBEqGPDMxegw+rHCBrLDzBEUvXoAaQaMSC+P8hKJG6iOu8zhAZSTy9jNBoQ0DhLO//uSyP8A8Hbq3hLYIdrqccw=)
2026-02-13 02:42:11.313644 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDp4c/Meuy2N/bh8gja3/+qrxsX0XRtdtQvBjLXaIqNP)
2026-02-13 02:42:11.313658 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCR88I9l6r2RkwRos7udv1d2ALHirSTsAneDqyM6GIpy/6dVJd/yL05ycFfWEKF02LArtVhKD15AE52NUZx61ueF465Qd02TRwBH8H+xGMSc3/9d0aKsJBjHA+tMNw0zLALKTtos1lGOyAIfLUxVtVEZyPHIw0jfHqnYQdxGwI2RO1rKPYTouaHfEqhg3zdx495eKnU6HL9NkhBK0yCCI1Lqeu4nR6+iz9RNwwjCBdpDRT5vmo22N15ePtw1vWvU7qwpIpFXpbd7al75+/FfZikH5zibOGhD3O7jJhY1HiESOQg42krkxJxnbf3CxktqSZpf1B3F4Swy39TVS60phC1axXMLwN8V8nJmYgx1bHitY7TA1bD/cOvJ6lFIqSUqyXBkbfUsF2entRmWF3B1OtY+s/fFBd2LTY3b4YTjUGI5zkq3ABT8IPU5m9k1355xQFyORiB88eJjM6dlLbqfI1JJiDfGwJx1cAbU103GKq7fAwkWYfyS/vYmBoWuZoTCG0=)
2026-02-13 02:42:11.313669 | orchestrator |
2026-02-13 02:42:11.313680 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:11.313691 | orchestrator | Friday 13 February 2026 02:42:10 +0000 (0:00:01.012) 0:00:11.426 *******
2026-02-13 02:42:11.313709 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMDSqYFeoZB/aPMIZaVFyhZi9eHTBVg3jHFIsh9kZb/h)
2026-02-13 02:42:22.049397 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4OFol91iUkZovqRrEPjTT9DNh8yVBV/m1/l5DiglOZ04IDXXjJBtXyiCt0JYC+awKOwlekXeWt3cqgjhMTjq9R1PfLkHthl72pba3nyWqRxWRD4QiPPrJ4kiZOpLIU4tYXYmbsL2+0fHUuvpuDKlOD8GP2c0LuUHu33qqrKBS88Ky+Kl6FwZVMBmt1fzyOKsCarCj6bx5mx0DNBopcOlmM3aill1WoZizgy4Rlvd3LU16WRM7zp78PbqmKlXS+EaaU7yr48hZnzt0878v3EMWRTIiwgjTegr8ux6TGAS2/Tvb8MAm4NirqSMMjwYuUz1U3CN5TOmkNfFTZhJi0dQ6S3IMj/fsM4kbW14naz2hc2v71VxAIPJ0ZS0+sZgE9B3h4uRWwUtTsRaIZFn2sLZC7qIv5COK8UMEdcSq4KSRy2wArOrmXVOG3vTObZf4c7OqAt9JvAtpmQWb0/Ap+O1rhTJllAwoCfd7gO0GMP+8Dyn++cux3h26GS61hr2PIJE=)
2026-02-13 02:42:22.049513 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPd34REYqAdyEWmzZf+o9M07xOc0RMOVnHFYgp9Wt82ZWwt1aakN+qqFfOd2z4YyqZkHEJaYhpvAKQMbovLyG3c=)
2026-02-13 02:42:22.049530 | orchestrator |
2026-02-13 02:42:22.049542 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:22.049553 | orchestrator | Friday 13 February 2026 02:42:11 +0000 (0:00:01.064) 0:00:12.490 *******
2026-02-13 02:42:22.049564 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCehI2r9UVn1YOubJhqDkBeBACsxZqUHln2K3USR5+frXv6k/JFjtDw2X2eO23vLAkeLY3zx2Bkd5QtIfGFMCplqmSe0ZEBnrkYbind8SRrcDyX1ODo5BpYm5grinWx6Srt7EQqspOuVRgG/M5I60kIq/QzqMdkxrkp1hGRSxb3xzu5os1oyLujFEmqTE4SZm8yQ/Q41Tg/FOY2U8dP3CNLamFwzCer8aocucD8Km+VA2wAwPCo1OhDFRWyglGOg+eCFHZKMb3e9d5xgldEsv9SW3k0/mIiXmWODu5MkF1K3jCIHoWZ7uz26/kJvvxnSCNv6AhtLYhW3UV/d/1COYKMAxkGXB65vYuVrG49ZNozSYCW0Onp5+3DGK/TrwKSGsGmWNlTh7fOhvZIZ2ZavXYhOaVBWaDqwGgFzMeB5NvtVB2OnbxPiICZgR+/tUwUUxL1MmEo3Ad+0O/wrc62bXzl4VyUen8/SA6r54vRZKLjGbh4asiXdgYFs7OrquekYOs=)
2026-02-13 02:42:22.049575 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF25PpWeSSiicEOpk/EXFo8Kb2plZ+OCDScK0JqjWf+Xydy3PbgY7IwiAmZ57UTg9kRBEQWNVBoJKJoMr4oDrKQ=)
2026-02-13 02:42:22.049609 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGRGCpmE0U0hfzgB2gKz+8uDyPFei3mrERPLSDWHkqHd)
2026-02-13 02:42:22.049621 | orchestrator |
2026-02-13 02:42:22.049631 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2026-02-13 02:42:22.049641 | orchestrator | Friday 13 February 2026 02:42:12 +0000 (0:00:01.078) 0:00:13.569 *******
2026-02-13 02:42:22.049651 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2026-02-13 02:42:22.049661 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2026-02-13 02:42:22.049671 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2026-02-13 02:42:22.049680 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2026-02-13 02:42:22.049689 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-13 02:42:22.049699 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-13 02:42:22.049708 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-13 02:42:22.049718 | orchestrator |
2026-02-13 02:42:22.049727 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2026-02-13 02:42:22.049738 | orchestrator | Friday 13 February 2026 02:42:17 +0000 (0:00:05.190) 0:00:18.760 *******
2026-02-13 02:42:22.049748 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2026-02-13 02:42:22.049760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2026-02-13 02:42:22.049770 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2026-02-13 02:42:22.049779 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2026-02-13 02:42:22.049789 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2026-02-13 02:42:22.049799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2026-02-13 02:42:22.049808 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2026-02-13 02:42:22.049818 | orchestrator |
2026-02-13 02:42:22.049840 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:22.049851 | orchestrator | Friday 13 February 2026 02:42:17 +0000 (0:00:00.181) 0:00:18.941 *******
2026-02-13 02:42:22.049860 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJJKc1LeEAJN1+Q+2kctfYYrk33j3tS4n0C992waQMs0)
2026-02-13 02:42:22.049888 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD4r679HCbyd3ftzr0hbGaXIy43bRvaagZmQTiGxUn5DBgpOU7mG/1ZtTSf6p2umRgpocQvESvE4aB4/qo359DZcMsyEABmZLOBO0JvoIWoZqIxZmoe4dH/p3OT8SWY5REFdZrNlfAKojKyeW/tyGG3A8vzWcmCHqmKOhYVOCWOqjwi/wnRl15nWjdk124H7NgKcmCxsxe2ipp0qRMOBqwXQAKALyMJ/DGNbAdCuPKV+M19FCazoROqXgBGTsq7/H8qyK7Lpw0LjzcbTxropzj5Fzuzq3FdIuvOFIxtaJAv2/+f82NF3S+W7sGQuULOea6KCiV3ZyVBmtN4I93pGL7WzFWToHxxcw2UVsy7E66YJvT5NO8tawrvqZ/cK1X1wPjhLbreT2PYDeYttH9ALJiKQ60bbZ5cF7mxNZq5ammn2lw+/CeBz5IMFB2Zif6s9NX+y3scKJAi1ZsOycjT3WUMBVm1wnkZtjTsLREVFlhTkIGrOrpIAHSTubVv4x/E1Nk=)
2026-02-13 02:42:22.049899 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPGjMsrH4nq1zLeIGs6nslIaeT4g+n/koan0t8KXEZrNizrMNLovpTf72IPU1FJesKx6wv3mlavAdZscpArDs6Y=)
2026-02-13 02:42:22.049944 | orchestrator |
2026-02-13 02:42:22.049956 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:22.049968 | orchestrator | Friday 13 February 2026 02:42:18 +0000 (0:00:01.062) 0:00:20.003 *******
2026-02-13 02:42:22.049985 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBP3zuaoQlkJTXqfpi7F4iSwsomQg+QSJ+XGPIR88gWQ5x9i1jsBwWiqT6J535Z7mrMa7fw/KNddnmFN9Gsb4aj0=)
2026-02-13 02:42:22.049997 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFsgvutxP6m4MBMoGI8Lp3sOUj6Mk894huP1d0OEPE3+)
2026-02-13 02:42:22.050009 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqq2IAdsOyqnYq+TxDx1n+kX/XYyHLJF0TNxsba9wPpyYhfH4kSA5+NgCQpoPYij0qVt6LCs80Jei9mcEvmYHvTIc3D7IKe8elwImYgv+3s2VhRbtn3W8xM8LptPhJBCnl/c4tH0r2HM0ZKUu5DHLRP2TmjdNrEpu3nsEHdWIZBbnC+CZ6+vzni1DkjX7RJTIXiVrjMfeH5+A/NvdxY49zKrTzsd0HMTAd9n4uARDuIYdIzFsmir0WcFJS5MovnBC4VJi7kryt5slOFYEczyQnTHsQrDIkVUYfgkDTMXgYyj8tDSVs4WRZRG484aP8X+AfHjS8tnfojNzJVVT3+pTdthKEITwSoeU1tXckvMsQWCZ0pXujin/Hq8fPYS2P4dKsd/Og1l8ONQmyg81Lf2fGlpStEzbBMWl6d6m9jWc3Q4l/HbZ6K545+0VprProaelEOV6tHbtyThPq/iTwVJvdarxeleFhgcODuTetyX6BLus+xnKAIWIRjh8PmlqM+Fs=)
2026-02-13 02:42:22.050077 | orchestrator |
2026-02-13 02:42:22.050088 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:22.050098 | orchestrator | Friday 13 February 2026 02:42:19 +0000 (0:00:01.102) 0:00:21.106 *******
2026-02-13 02:42:22.050107 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvSiAYM6d8pTrkyZdItiv7kETrTz6i1RjNiJpzG1suoHtukoX04AEGjKVU5WY1pgiWT3ZfPNCpQ8XRlJSjnuKkgx2+iTc3QkS+cro5GIsDxGhjDMA+4/aR4tPecnmgs0LBWV9yVmDbcQ/+KVnJUR1EPqTfY/y5+444s16CDBMqVlAcadeLnR8YXD21CxQAxcQ4qoQ/bPCZv7xStHVXOfEIocDgGFrR0Dq9oZ27mnsZ0UWGbYUDrQeVhwGL/P5g4USTZp8BNuegRenulS0faWepsr3scJLIRZNGIzdzjoXamwibluGmdytsNR7dlIRGDUGqKjdyZXrW+vyzVAtt9Hci7y37nKoR2Z/7Q5z4Tf5xiMiuR6Jgeewhk0dflm9ueKtvsgOj5Q+BW/A9tNze2KpmvomFlUKivsB/Lr01XDqrDw77kSiK99nRlztF1A6wrbO+LEL6jc9/sbOW/YhoMXylza1Uco+KcVF2IZJ9EDTVc8qnIXRNpC1sSOjm8WFP3AU=)
2026-02-13 02:42:22.050118 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE+jhZ8MiPIBxT5vxliKQuxfW8zyfJXtg5JuJ2P4dvskWJyhzs1ehfYcOO8HIOzLpuB/8TRoWNbEudP5g7fXr+Y=)
2026-02-13 02:42:22.050127 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIullzqz5EbudNRTwnxbSr78UiJB0JM7MM4ga1cMpFtI)
2026-02-13 02:42:22.050137 | orchestrator |
2026-02-13 02:42:22.050147 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:22.050156 | orchestrator | Friday 13 February 2026 02:42:20 +0000 (0:00:01.041) 0:00:22.147 *******
2026-02-13 02:42:22.050174 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAID7RfKe7PODIUXjfeSInT53+G/mqam4C8xb0Tz0s6GQJ)
2026-02-13 02:42:22.050202 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbRTOKf7NTeNs9wOdKKeM88Ggas2Y6YATnjV7kXOFZZbBNw52bdbug62x+f21cFocecXzW4k8+ZmVqTXvjqoWE2NICzC7lo1tMhNw0BYW54F4yX8+O3rN0EKwKMVsJRiGdIYLzvgn2lnBhdxFrI/Ktirr1r+bTT/LVja78p+KDxenzj7kUPony6wuQMoR7I+2FN4tjStec7EljgkWMSGyAXRjDJJtfI09QItu9kdosC+UznKLveLIvU2xLdHhF0gYYQ6iF7nWjzagDdkTrthhwuV4BpW6lX7lpgY2aPh0yrt68adsMpLBlj3TnA6hfdA0740fo8/Ssn1wRepCsq2VFPQ0EOpqFosZ5jGj7ncJpEApNVF+a3rq6kAwMX5p54Or3kbRDt+AxuZfTccWVcSOPh+W5hChrWuxM+NuTfUrrjP1K8vJ3djKqBxEy5FVtx23TUH5Lr4TUggyWL1rzLfhWLswdXfpOmYrXZCF6TMBUFllerrhuR6cXzy4tpwErV/c=)
2026-02-13 02:42:26.444464 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPFLfOGdv3jZ9QJXhOH7mdwxIra59q0BL9Z+XpTy0YPI0GnPfY5dqTuscWEA4AymQjgg2b9qcX5yFSIpas5DDZc=)
2026-02-13 02:42:26.444566 | orchestrator |
2026-02-13 02:42:26.444606 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:26.444619 | orchestrator | Friday 13 February 2026 02:42:22 +0000 (0:00:01.079) 0:00:23.227 *******
2026-02-13 02:42:26.444632 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCR88I9l6r2RkwRos7udv1d2ALHirSTsAneDqyM6GIpy/6dVJd/yL05ycFfWEKF02LArtVhKD15AE52NUZx61ueF465Qd02TRwBH8H+xGMSc3/9d0aKsJBjHA+tMNw0zLALKTtos1lGOyAIfLUxVtVEZyPHIw0jfHqnYQdxGwI2RO1rKPYTouaHfEqhg3zdx495eKnU6HL9NkhBK0yCCI1Lqeu4nR6+iz9RNwwjCBdpDRT5vmo22N15ePtw1vWvU7qwpIpFXpbd7al75+/FfZikH5zibOGhD3O7jJhY1HiESOQg42krkxJxnbf3CxktqSZpf1B3F4Swy39TVS60phC1axXMLwN8V8nJmYgx1bHitY7TA1bD/cOvJ6lFIqSUqyXBkbfUsF2entRmWF3B1OtY+s/fFBd2LTY3b4YTjUGI5zkq3ABT8IPU5m9k1355xQFyORiB88eJjM6dlLbqfI1JJiDfGwJx1cAbU103GKq7fAwkWYfyS/vYmBoWuZoTCG0=)
2026-02-13 02:42:26.444647 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBEqGPDMxegw+rHCBrLDzBEUvXoAaQaMSC+P8hKJG6iOu8zhAZSTy9jNBoQ0DhLO//uSyP8A8Hbq3hLYIdrqccw=)
2026-02-13 02:42:26.444659 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDp4c/Meuy2N/bh8gja3/+qrxsX0XRtdtQvBjLXaIqNP)
2026-02-13 02:42:26.444671 | orchestrator |
2026-02-13 02:42:26.444682 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:26.444693 | orchestrator | Friday 13 February 2026 02:42:23 +0000 (0:00:01.060) 0:00:24.287 *******
2026-02-13 02:42:26.444704 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMDSqYFeoZB/aPMIZaVFyhZi9eHTBVg3jHFIsh9kZb/h)
2026-02-13 02:42:26.444715 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4OFol91iUkZovqRrEPjTT9DNh8yVBV/m1/l5DiglOZ04IDXXjJBtXyiCt0JYC+awKOwlekXeWt3cqgjhMTjq9R1PfLkHthl72pba3nyWqRxWRD4QiPPrJ4kiZOpLIU4tYXYmbsL2+0fHUuvpuDKlOD8GP2c0LuUHu33qqrKBS88Ky+Kl6FwZVMBmt1fzyOKsCarCj6bx5mx0DNBopcOlmM3aill1WoZizgy4Rlvd3LU16WRM7zp78PbqmKlXS+EaaU7yr48hZnzt0878v3EMWRTIiwgjTegr8ux6TGAS2/Tvb8MAm4NirqSMMjwYuUz1U3CN5TOmkNfFTZhJi0dQ6S3IMj/fsM4kbW14naz2hc2v71VxAIPJ0ZS0+sZgE9B3h4uRWwUtTsRaIZFn2sLZC7qIv5COK8UMEdcSq4KSRy2wArOrmXVOG3vTObZf4c7OqAt9JvAtpmQWb0/Ap+O1rhTJllAwoCfd7gO0GMP+8Dyn++cux3h26GS61hr2PIJE=)
2026-02-13 02:42:26.444727 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPd34REYqAdyEWmzZf+o9M07xOc0RMOVnHFYgp9Wt82ZWwt1aakN+qqFfOd2z4YyqZkHEJaYhpvAKQMbovLyG3c=)
2026-02-13 02:42:26.444738 | orchestrator |
2026-02-13 02:42:26.444749 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2026-02-13 02:42:26.444759 | orchestrator | Friday 13 February 2026 02:42:24 +0000 (0:00:01.041) 0:00:25.328 *******
2026-02-13 02:42:26.444787 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCehI2r9UVn1YOubJhqDkBeBACsxZqUHln2K3USR5+frXv6k/JFjtDw2X2eO23vLAkeLY3zx2Bkd5QtIfGFMCplqmSe0ZEBnrkYbind8SRrcDyX1ODo5BpYm5grinWx6Srt7EQqspOuVRgG/M5I60kIq/QzqMdkxrkp1hGRSxb3xzu5os1oyLujFEmqTE4SZm8yQ/Q41Tg/FOY2U8dP3CNLamFwzCer8aocucD8Km+VA2wAwPCo1OhDFRWyglGOg+eCFHZKMb3e9d5xgldEsv9SW3k0/mIiXmWODu5MkF1K3jCIHoWZ7uz26/kJvvxnSCNv6AhtLYhW3UV/d/1COYKMAxkGXB65vYuVrG49ZNozSYCW0Onp5+3DGK/TrwKSGsGmWNlTh7fOhvZIZ2ZavXYhOaVBWaDqwGgFzMeB5NvtVB2OnbxPiICZgR+/tUwUUxL1MmEo3Ad+0O/wrc62bXzl4VyUen8/SA6r54vRZKLjGbh4asiXdgYFs7OrquekYOs=)
2026-02-13 02:42:26.444800 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF25PpWeSSiicEOpk/EXFo8Kb2plZ+OCDScK0JqjWf+Xydy3PbgY7IwiAmZ57UTg9kRBEQWNVBoJKJoMr4oDrKQ=)
2026-02-13 02:42:26.444812 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGRGCpmE0U0hfzgB2gKz+8uDyPFei3mrERPLSDWHkqHd)
2026-02-13 02:42:26.444823 | orchestrator |
2026-02-13 02:42:26.444834 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2026-02-13 02:42:26.444845 | orchestrator | Friday 13 February 2026 02:42:25 +0000 (0:00:01.074) 0:00:26.402 *******
2026-02-13 02:42:26.444863 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-02-13 02:42:26.444875 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-02-13 02:42:26.444886 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-02-13 02:42:26.444897 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-02-13 02:42:26.444956 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-13 02:42:26.444970 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-13 02:42:26.444980 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-13 02:42:26.444991 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:42:26.445002 | orchestrator |
2026-02-13 02:42:26.445013 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2026-02-13 02:42:26.445024 | orchestrator | Friday 13 February 2026 02:42:25 +0000 (0:00:00.167) 0:00:26.570 *******
2026-02-13 02:42:26.445035 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:42:26.445045 | orchestrator |
2026-02-13 02:42:26.445056 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2026-02-13 02:42:26.445067 | orchestrator | Friday 13 February 2026 02:42:25 +0000 (0:00:00.051) 0:00:26.624 *******
2026-02-13 02:42:26.445077 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:42:26.445088 | orchestrator |
2026-02-13 02:42:26.445099 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2026-02-13 02:42:26.445109 | orchestrator | Friday 13 February 2026 02:42:25 +0000 (0:00:00.051) 0:00:26.676 *******
2026-02-13 02:42:26.445120 | orchestrator | changed: [testbed-manager]
2026-02-13 02:42:26.445130 | orchestrator |
2026-02-13 02:42:26.445141 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 02:42:26.445152 | orchestrator | testbed-manager : ok=31 changed=15 unreachable=0 failed=0 skipped=3 rescued=0 ignored=0
2026-02-13 02:42:26.445164 | orchestrator |
2026-02-13 02:42:26.445175 | orchestrator |
2026-02-13 02:42:26.445186 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 02:42:26.445202 | orchestrator | Friday 13 February 2026 02:42:26 +0000 (0:00:00.765) 0:00:27.441 *******
2026-02-13 02:42:26.445213 | orchestrator | ===============================================================================
2026-02-13 02:42:26.445224 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.74s
2026-02-13 02:42:26.445234 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.19s
2026-02-13 02:42:26.445245 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s
2026-02-13 02:42:26.445256 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s
2026-02-13 02:42:26.445267 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2026-02-13 02:42:26.445277 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2026-02-13 02:42:26.445288 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2026-02-13 02:42:26.445298 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2026-02-13 02:42:26.445309 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2026-02-13 02:42:26.445320 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2026-02-13 02:42:26.445330 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2026-02-13 02:42:26.445340 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-02-13 02:42:26.445351 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2026-02-13 02:42:26.445361 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2026-02-13 02:42:26.445372 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2026-02-13 02:42:26.445389 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2026-02-13 02:42:26.445400 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.77s
2026-02-13 02:42:26.445410 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.18s
2026-02-13 02:42:26.445421 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s
2026-02-13 02:42:26.445432 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2026-02-13 02:42:26.772873 | orchestrator | + osism apply squid
2026-02-13 02:42:38.691853 | orchestrator | 2026-02-13 02:42:38 | INFO  | Task 4e2b7085-8928-40ae-b2c3-c759d7022de1 (squid) was prepared for execution.
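For reference, the scan-then-write cycle logged by the known_hosts role above can be sketched in shell. This is a hypothetical illustration only: the `ssh-keyscan` flags, the deduplication via `sort -u`, the target file, and the 0644 mode are assumptions, not taken from the role's actual tasks.

```shell
# Hypothetical sketch of a "Run ssh-keyscan" / "Write scanned known_hosts entries"
# cycle. In the real role the entries come from ssh-keyscan; the demo below feeds
# pre-scanned entries in directly so it works without network access.
scan_hosts() {
  # Network step (assumed flags): prints one "<host> <keytype> <base64-key>" per key.
  for host in "$@"; do
    ssh-keyscan -t rsa,ecdsa,ed25519 "$host" 2>/dev/null
  done
}

write_scanned_entries() {
  # Append entries, drop duplicates, and fix permissions (mode is an assumption).
  file=$1; shift
  printf '%s\n' "$@" >> "$file"
  sort -u -o "$file" "$file"
  chmod 0644 "$file"
}

demo=$(mktemp)
write_scanned_entries "$demo" \
  "testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJJKc1LeEAJN1+Q+2kctfYYrk33j3tS4n0C992waQMs0" \
  "testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJJKc1LeEAJN1+Q+2kctfYYrk33j3tS4n0C992waQMs0"
wc -l < "$demo"   # duplicate entry collapses to a single line
```

Running each host's entries through a separate `write_scanned_entries` call mirrors why the log shows one "Write scanned known_hosts entries" task per host.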
2026-02-13 02:42:38.692070 | orchestrator | 2026-02-13 02:42:38 | INFO  | It takes a moment until task 4e2b7085-8928-40ae-b2c3-c759d7022de1 (squid) has been started and output is visible here.
2026-02-13 02:44:34.634860 | orchestrator |
2026-02-13 02:44:34.635008 | orchestrator | PLAY [Apply role squid] ********************************************************
2026-02-13 02:44:34.635024 | orchestrator |
2026-02-13 02:44:34.635037 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2026-02-13 02:44:34.635048 | orchestrator | Friday 13 February 2026 02:42:42 +0000 (0:00:00.119) 0:00:00.119 *******
2026-02-13 02:44:34.635060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2026-02-13 02:44:34.635072 | orchestrator |
2026-02-13 02:44:34.635083 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2026-02-13 02:44:34.635094 | orchestrator | Friday 13 February 2026 02:42:42 +0000 (0:00:00.062) 0:00:00.181 *******
2026-02-13 02:44:34.635106 | orchestrator | ok: [testbed-manager]
2026-02-13 02:44:34.635117 | orchestrator |
2026-02-13 02:44:34.635128 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2026-02-13 02:44:34.635139 | orchestrator | Friday 13 February 2026 02:42:43 +0000 (0:00:01.181) 0:00:01.363 *******
2026-02-13 02:44:34.635151 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2026-02-13 02:44:34.635162 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2026-02-13 02:44:34.635173 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2026-02-13 02:44:34.635183 | orchestrator |
2026-02-13 02:44:34.635194 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2026-02-13 02:44:34.635205 | orchestrator | Friday 13 February 2026 02:42:44 +0000 (0:00:01.029) 0:00:02.392 *******
2026-02-13 02:44:34.635215 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2026-02-13 02:44:34.635256 | orchestrator |
2026-02-13 02:44:34.635267 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2026-02-13 02:44:34.635278 | orchestrator | Friday 13 February 2026 02:42:45 +0000 (0:00:00.942) 0:00:03.335 *******
2026-02-13 02:44:34.635289 | orchestrator | ok: [testbed-manager]
2026-02-13 02:44:34.635299 | orchestrator |
2026-02-13 02:44:34.635310 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2026-02-13 02:44:34.635321 | orchestrator | Friday 13 February 2026 02:42:46 +0000 (0:00:00.320) 0:00:03.655 *******
2026-02-13 02:44:34.635332 | orchestrator | changed: [testbed-manager]
2026-02-13 02:44:34.635344 | orchestrator |
2026-02-13 02:44:34.635354 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2026-02-13 02:44:34.635365 | orchestrator | Friday 13 February 2026 02:42:46 +0000 (0:00:00.854) 0:00:04.509 *******
2026-02-13 02:44:34.635376 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2026-02-13 02:44:34.635388 | orchestrator | ok: [testbed-manager]
2026-02-13 02:44:34.635402 | orchestrator |
2026-02-13 02:44:34.635414 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2026-02-13 02:44:34.635425 | orchestrator | Friday 13 February 2026 02:43:21 +0000 (0:00:34.640) 0:00:39.149 *******
2026-02-13 02:44:34.635463 | orchestrator | changed: [testbed-manager]
2026-02-13 02:44:34.635475 | orchestrator |
2026-02-13 02:44:34.635486 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2026-02-13 02:44:34.635497 | orchestrator | Friday 13 February 2026 02:43:33 +0000 (0:00:12.001) 0:00:51.151 *******
2026-02-13 02:44:34.635508 | orchestrator | Pausing for 60 seconds
2026-02-13 02:44:34.635519 | orchestrator | changed: [testbed-manager]
2026-02-13 02:44:34.635529 | orchestrator |
2026-02-13 02:44:34.635540 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2026-02-13 02:44:34.635551 | orchestrator | Friday 13 February 2026 02:44:33 +0000 (0:01:00.084) 0:01:51.235 *******
2026-02-13 02:44:34.635561 | orchestrator | ok: [testbed-manager]
2026-02-13 02:44:34.635572 | orchestrator |
2026-02-13 02:44:34.635583 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2026-02-13 02:44:34.635593 | orchestrator | Friday 13 February 2026 02:44:33 +0000 (0:00:00.063) 0:01:51.299 *******
2026-02-13 02:44:34.635604 | orchestrator | changed: [testbed-manager]
2026-02-13 02:44:34.635615 | orchestrator |
2026-02-13 02:44:34.635625 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 02:44:34.635636 | orchestrator | testbed-manager : ok=11 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:44:34.635647 | orchestrator |
2026-02-13 02:44:34.635658 | orchestrator |
2026-02-13 02:44:34.635668 | orchestrator |
2026-02-13 02:44:34.635679 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 02:44:34.635690 | orchestrator | Friday 13 February 2026 02:44:34 +0000 (0:00:00.619) 0:01:51.919 *******
2026-02-13 02:44:34.635718 | orchestrator | ===============================================================================
2026-02-13 02:44:34.635729 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-02-13 02:44:34.635740 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.64s
2026-02-13 02:44:34.635750 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.00s
2026-02-13 02:44:34.635761 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.18s
2026-02-13 02:44:34.635771 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.03s
2026-02-13 02:44:34.635782 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.94s
2026-02-13 02:44:34.635792 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.85s
2026-02-13 02:44:34.635802 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.62s
2026-02-13 02:44:34.635813 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.32s
2026-02-13 02:44:34.635823 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2026-02-13 02:44:34.977499 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.06s
2026-02-13 02:44:34.977594 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-13 02:44:35.022294 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-13 02:44:35.022398 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-13 02:44:35.029389 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-13 02:44:35.029435 | orchestrator | + set -e
2026-02-13 02:44:35.029456 | orchestrator | + NAMESPACE=kolla/release
2026-02-13 02:44:35.036505 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-13 02:44:35.108270 | orchestrator | ++ semver 9.5.0 9.0.0
2026-02-13 02:44:35.109564 | orchestrator | + [[ 1 -lt 0 ]]
2026-02-13 02:44:47.105681 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-13 02:44:47.105801 | orchestrator | 2026-02-13 02:44:47 | INFO  | Task ea619c33-82c9-4be8-bfb8-3112987426ba (operator) was prepared for execution.
2026-02-13 02:45:03.849238 | orchestrator | 2026-02-13 02:45:03 | INFO  | It takes a moment until task ea619c33-82c9-4be8-bfb8-3112987426ba (operator) has been started and output is visible here.
2026-02-13 02:45:03.849407 | orchestrator |
2026-02-13 02:45:03.849454 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-13 02:45:03.849467 | orchestrator |
2026-02-13 02:45:03.849479 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-13 02:45:03.849490 | orchestrator | Friday 13 February 2026 02:44:51 +0000 (0:00:00.134) 0:00:00.134 *******
2026-02-13 02:45:03.849502 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:45:03.849513 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:45:03.849524 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:45:03.849534 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:45:03.849545 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:45:03.849555 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:45:03.849566 | orchestrator |
2026-02-13 02:45:03.849577 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-13 02:45:03.849588 | orchestrator | Friday 13 February 2026 02:44:55 +0000 (0:00:04.216) 0:00:04.350 *******
2026-02-13 02:45:03.849599 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:45:03.849609 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:45:03.849620 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:45:03.849630 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:45:03.849641 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:45:03.849651 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:45:03.849662 | orchestrator |
2026-02-13 02:45:03.849673 | orchestrator | PLAY [Apply role operator] *****************************************************
2026-02-13 02:45:03.849683 | orchestrator |
2026-02-13 02:45:03.849694 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2026-02-13 02:45:03.849705 | orchestrator | Friday 13 February 2026 02:44:56 +0000 (0:00:00.766) 0:00:05.117 *******
2026-02-13 02:45:03.849715 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:45:03.849726 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:45:03.849736 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:45:03.849747 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:45:03.849757 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:45:03.849769 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:45:03.849783 | orchestrator |
2026-02-13 02:45:03.849810 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2026-02-13 02:45:03.849823 | orchestrator | Friday 13 February 2026 02:44:56 +0000 (0:00:00.168) 0:00:05.285 *******
2026-02-13 02:45:03.849835 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:45:03.849849 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:45:03.849862 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:45:03.849874 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:45:03.849887 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:45:03.849899 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:45:03.849909 |
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-13 02:45:03.849920 | orchestrator | Friday 13 February 2026 02:44:56 +0000 (0:00:00.289) 0:00:05.575 ******* 2026-02-13 02:45:03.849931 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:45:03.849943 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:45:03.849954 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:45:03.849964 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:45:03.849975 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:45:03.849986 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:45:03.849997 | orchestrator | 2026-02-13 02:45:03.850007 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-13 02:45:03.850073 | orchestrator | Friday 13 February 2026 02:44:57 +0000 (0:00:00.619) 0:00:06.194 ******* 2026-02-13 02:45:03.850085 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:45:03.850096 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:45:03.850107 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:45:03.850118 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:45:03.850128 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:45:03.850139 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:45:03.850150 | orchestrator | 2026-02-13 02:45:03.850161 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-13 02:45:03.850180 | orchestrator | Friday 13 February 2026 02:44:58 +0000 (0:00:00.808) 0:00:07.003 ******* 2026-02-13 02:45:03.850191 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-13 02:45:03.850202 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-13 02:45:03.850213 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-13 02:45:03.850223 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-13 02:45:03.850234 | 
orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-13 02:45:03.850245 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-13 02:45:03.850256 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-13 02:45:03.850266 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-13 02:45:03.850277 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-13 02:45:03.850313 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-13 02:45:03.850332 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-13 02:45:03.850350 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-13 02:45:03.850367 | orchestrator | 2026-02-13 02:45:03.850387 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-13 02:45:03.850406 | orchestrator | Friday 13 February 2026 02:44:59 +0000 (0:00:01.212) 0:00:08.215 ******* 2026-02-13 02:45:03.850424 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:45:03.850438 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:45:03.850449 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:45:03.850460 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:45:03.850470 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:45:03.850481 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:45:03.850492 | orchestrator | 2026-02-13 02:45:03.850503 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-13 02:45:03.850515 | orchestrator | Friday 13 February 2026 02:45:00 +0000 (0:00:01.239) 0:00:09.454 ******* 2026-02-13 02:45:03.850526 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-13 02:45:03.850537 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-13 02:45:03.850548 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-13 02:45:03.850559 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-13 02:45:03.850588 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-13 02:45:03.850599 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-13 02:45:03.850610 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-13 02:45:03.850621 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-13 02:45:03.850631 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-13 02:45:03.850642 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-13 02:45:03.850653 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-13 02:45:03.850663 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-13 02:45:03.850674 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-13 02:45:03.850684 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-13 02:45:03.850695 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-13 02:45:03.850706 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-13 02:45:03.850716 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-13 02:45:03.850727 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-13 02:45:03.850737 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-13 02:45:03.850748 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-13 02:45:03.850759 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-13 02:45:03.850778 | 
orchestrator | 2026-02-13 02:45:03.850789 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-13 02:45:03.850801 | orchestrator | Friday 13 February 2026 02:45:01 +0000 (0:00:01.208) 0:00:10.663 ******* 2026-02-13 02:45:03.850811 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:45:03.850823 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:45:03.850833 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:45:03.850844 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:45:03.850855 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:45:03.850865 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:45:03.850876 | orchestrator | 2026-02-13 02:45:03.850887 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-13 02:45:03.850898 | orchestrator | Friday 13 February 2026 02:45:01 +0000 (0:00:00.181) 0:00:10.845 ******* 2026-02-13 02:45:03.850909 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:45:03.850919 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:45:03.850930 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:45:03.850941 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:45:03.850951 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:45:03.850962 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:45:03.850973 | orchestrator | 2026-02-13 02:45:03.850984 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-13 02:45:03.850994 | orchestrator | Friday 13 February 2026 02:45:02 +0000 (0:00:00.172) 0:00:11.018 ******* 2026-02-13 02:45:03.851005 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:45:03.851016 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:45:03.851026 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:45:03.851037 | orchestrator | changed: [testbed-node-2] 2026-02-13 
02:45:03.851048 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:45:03.851058 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:45:03.851069 | orchestrator | 2026-02-13 02:45:03.851080 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-13 02:45:03.851091 | orchestrator | Friday 13 February 2026 02:45:02 +0000 (0:00:00.570) 0:00:11.588 ******* 2026-02-13 02:45:03.851101 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:45:03.851112 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:45:03.851123 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:45:03.851133 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:45:03.851153 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:45:03.851164 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:45:03.851174 | orchestrator | 2026-02-13 02:45:03.851185 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-13 02:45:03.851196 | orchestrator | Friday 13 February 2026 02:45:02 +0000 (0:00:00.180) 0:00:11.769 ******* 2026-02-13 02:45:03.851207 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-13 02:45:03.851217 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:45:03.851227 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-13 02:45:03.851238 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:45:03.851249 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-13 02:45:03.851259 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:45:03.851269 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-13 02:45:03.851280 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:45:03.851318 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-13 02:45:03.851329 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:45:03.851340 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-13 
02:45:03.851350 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:45:03.851361 | orchestrator | 2026-02-13 02:45:03.851372 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-13 02:45:03.851382 | orchestrator | Friday 13 February 2026 02:45:03 +0000 (0:00:00.732) 0:00:12.502 ******* 2026-02-13 02:45:03.851393 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:45:03.851410 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:45:03.851421 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:45:03.851431 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:45:03.851442 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:45:03.851452 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:45:03.851463 | orchestrator | 2026-02-13 02:45:03.851474 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-13 02:45:03.851485 | orchestrator | Friday 13 February 2026 02:45:03 +0000 (0:00:00.177) 0:00:12.679 ******* 2026-02-13 02:45:03.851495 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:45:03.851506 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:45:03.851517 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:45:03.851527 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:45:03.851545 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:45:05.235481 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:45:05.235580 | orchestrator | 2026-02-13 02:45:05.235596 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-13 02:45:05.235609 | orchestrator | Friday 13 February 2026 02:45:03 +0000 (0:00:00.159) 0:00:12.838 ******* 2026-02-13 02:45:05.235621 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:45:05.235632 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:45:05.235643 | orchestrator | skipping: [testbed-node-2] 2026-02-13 
02:45:05.235654 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:45:05.235664 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:45:05.235675 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:45:05.235686 | orchestrator | 2026-02-13 02:45:05.235697 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-13 02:45:05.235708 | orchestrator | Friday 13 February 2026 02:45:04 +0000 (0:00:00.173) 0:00:13.012 ******* 2026-02-13 02:45:05.235719 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:45:05.235730 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:45:05.235740 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:45:05.235757 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:45:05.235780 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:45:05.235807 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:45:05.235824 | orchestrator | 2026-02-13 02:45:05.235842 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-13 02:45:05.235862 | orchestrator | Friday 13 February 2026 02:45:04 +0000 (0:00:00.683) 0:00:13.695 ******* 2026-02-13 02:45:05.235880 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:45:05.235899 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:45:05.235913 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:45:05.235924 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:45:05.235935 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:45:05.235945 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:45:05.235956 | orchestrator | 2026-02-13 02:45:05.235967 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:45:05.235996 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 02:45:05.236009 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 02:45:05.236020 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 02:45:05.236033 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 02:45:05.236046 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 02:45:05.236058 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 02:45:05.236095 | orchestrator | 2026-02-13 02:45:05.236108 | orchestrator | 2026-02-13 02:45:05.236121 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 02:45:05.236132 | orchestrator | Friday 13 February 2026 02:45:04 +0000 (0:00:00.249) 0:00:13.945 ******* 2026-02-13 02:45:05.236143 | orchestrator | =============================================================================== 2026-02-13 02:45:05.236153 | orchestrator | Gathering Facts --------------------------------------------------------- 4.22s 2026-02-13 02:45:05.236164 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.24s 2026-02-13 02:45:05.236175 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s 2026-02-13 02:45:05.236186 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.21s 2026-02-13 02:45:05.236197 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.81s 2026-02-13 02:45:05.236208 | orchestrator | Do not require tty for all users ---------------------------------------- 0.77s 2026-02-13 02:45:05.236218 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.73s 2026-02-13 02:45:05.236228 | orchestrator | osism.commons.operator : Set password 
----------------------------------- 0.68s 2026-02-13 02:45:05.236239 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.62s 2026-02-13 02:45:05.236250 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2026-02-13 02:45:05.236260 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.29s 2026-02-13 02:45:05.236271 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s 2026-02-13 02:45:05.236281 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2026-02-13 02:45:05.236329 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.18s 2026-02-13 02:45:05.236340 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s 2026-02-13 02:45:05.236351 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2026-02-13 02:45:05.236362 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.17s 2026-02-13 02:45:05.236372 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2026-02-13 02:45:05.236383 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2026-02-13 02:45:05.552493 | orchestrator | + osism apply --environment custom facts 2026-02-13 02:45:07.472561 | orchestrator | 2026-02-13 02:45:07 | INFO  | Trying to run play facts in environment custom 2026-02-13 02:45:17.617990 | orchestrator | 2026-02-13 02:45:17 | INFO  | Task d5e3561c-4b91-4aa8-9404-e0bdfd9688a9 (facts) was prepared for execution. 2026-02-13 02:45:17.618163 | orchestrator | 2026-02-13 02:45:17 | INFO  | It takes a moment until task d5e3561c-4b91-4aa8-9404-e0bdfd9688a9 (facts) has been started and output is visible here. 
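The `osism apply --environment custom facts` step above distributes files that the following plays install under the custom facts directory, where Ansible picks them up as "local facts". As a minimal sketch of that mechanism (the temporary directory and the JSON content here are illustrative stand-ins, not the testbed's real fact files): any `*.fact` file below `/etc/ansible/facts.d` containing JSON becomes available to playbooks as `ansible_local.<name>` after the next fact-gathering run.

```shell
# Sketch of Ansible's local-facts mechanism. Assumptions: the directory and
# the fact content are illustrative; on the testbed the real files are copied
# from the configuration repository by the "Copy fact files" tasks.
facts_d=$(mktemp -d)   # stand-in for /etc/ansible/facts.d on a node

# A *.fact file holding JSON (or an executable printing JSON) is read during
# fact gathering and exposed as ansible_local.testbed_ceph_devices
cat > "$facts_d/testbed_ceph_devices.fact" <<'EOF'
{"devices": ["/dev/sdb", "/dev/sdc"]}
EOF

cat "$facts_d/testbed_ceph_devices.fact"
```

In a play this would then be referenced as `{{ ansible_local.testbed_ceph_devices.devices }}`, which is why the run below ends with a "Gathers facts about hosts" task to re-read the freshly copied facts.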
2026-02-13 02:46:00.805060 | orchestrator |
2026-02-13 02:46:00.805213 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-02-13 02:46:00.805248 | orchestrator |
2026-02-13 02:46:00.805269 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-13 02:46:00.805289 | orchestrator | Friday 13 February 2026 02:45:21 +0000 (0:00:00.059) 0:00:00.059 *******
2026-02-13 02:46:00.805301 | orchestrator | ok: [testbed-manager]
2026-02-13 02:46:00.805313 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:46:00.805326 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:46:00.805337 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:46:00.805347 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:46:00.805364 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:46:00.805382 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:46:00.805400 | orchestrator |
2026-02-13 02:46:00.805473 | orchestrator | TASK [Copy fact file] **********************************************************
2026-02-13 02:46:00.805511 | orchestrator | Friday 13 February 2026 02:45:22 +0000 (0:00:01.354) 0:00:01.414 *******
2026-02-13 02:46:00.805523 | orchestrator | ok: [testbed-manager]
2026-02-13 02:46:00.805534 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:46:00.805546 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:46:00.805567 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:46:00.805588 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:46:00.805601 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:46:00.805615 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:46:00.805627 | orchestrator |
2026-02-13 02:46:00.805639 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-02-13 02:46:00.805652 | orchestrator |
2026-02-13 02:46:00.805665 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-02-13 02:46:00.805678 | orchestrator | Friday 13 February 2026 02:45:23 +0000 (0:00:01.109) 0:00:02.523 *******
2026-02-13 02:46:00.805690 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:46:00.805702 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:46:00.805719 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:46:00.805739 | orchestrator |
2026-02-13 02:46:00.805758 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-02-13 02:46:00.805772 | orchestrator | Friday 13 February 2026 02:45:23 +0000 (0:00:00.087) 0:00:02.610 *******
2026-02-13 02:46:00.805784 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:46:00.805796 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:46:00.805809 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:46:00.805821 | orchestrator |
2026-02-13 02:46:00.805834 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-02-13 02:46:00.805847 | orchestrator | Friday 13 February 2026 02:45:23 +0000 (0:00:00.174) 0:00:02.785 *******
2026-02-13 02:46:00.805860 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:46:00.805892 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:46:00.805904 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:46:00.805916 | orchestrator |
2026-02-13 02:46:00.805928 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-02-13 02:46:00.805939 | orchestrator | Friday 13 February 2026 02:45:24 +0000 (0:00:00.176) 0:00:02.961 *******
2026-02-13 02:46:00.805963 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 02:46:00.805976 | orchestrator |
2026-02-13 02:46:00.805987 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-02-13 02:46:00.805998 | orchestrator | Friday 13 February 2026 02:45:24 +0000 (0:00:00.129) 0:00:03.090 *******
2026-02-13 02:46:00.806009 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:46:00.806080 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:46:00.806091 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:46:00.806102 | orchestrator |
2026-02-13 02:46:00.806113 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-02-13 02:46:00.806124 | orchestrator | Friday 13 February 2026 02:45:24 +0000 (0:00:00.414) 0:00:03.504 *******
2026-02-13 02:46:00.806138 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:46:00.806158 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:46:00.806176 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:46:00.806187 | orchestrator |
2026-02-13 02:46:00.806198 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-02-13 02:46:00.806209 | orchestrator | Friday 13 February 2026 02:45:24 +0000 (0:00:00.121) 0:00:03.626 *******
2026-02-13 02:46:00.806219 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:46:00.806230 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:46:00.806240 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:46:00.806251 | orchestrator |
2026-02-13 02:46:00.806262 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-02-13 02:46:00.806272 | orchestrator | Friday 13 February 2026 02:45:25 +0000 (0:00:00.999) 0:00:04.625 *******
2026-02-13 02:46:00.806294 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:46:00.806304 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:46:00.806315 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:46:00.806326 | orchestrator |
2026-02-13 02:46:00.806387 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-13 02:46:00.806401 | orchestrator | Friday 13 February 2026 02:45:26 +0000 (0:00:00.474) 0:00:05.100 *******
2026-02-13 02:46:00.806454 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:46:00.806467 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:46:00.806478 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:46:00.806488 | orchestrator |
2026-02-13 02:46:00.806499 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-13 02:46:00.806510 | orchestrator | Friday 13 February 2026 02:45:27 +0000 (0:00:01.127) 0:00:06.228 *******
2026-02-13 02:46:00.806521 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:46:00.806531 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:46:00.806542 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:46:00.806552 | orchestrator |
2026-02-13 02:46:00.806563 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-02-13 02:46:00.806574 | orchestrator | Friday 13 February 2026 02:45:43 +0000 (0:00:15.739) 0:00:21.967 *******
2026-02-13 02:46:00.806584 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:46:00.806595 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:46:00.806606 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:46:00.806616 | orchestrator |
2026-02-13 02:46:00.806627 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-02-13 02:46:00.806660 | orchestrator | Friday 13 February 2026 02:45:43 +0000 (0:00:00.104) 0:00:22.071 *******
2026-02-13 02:46:00.806672 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:46:00.806683 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:46:00.806693 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:46:00.806704 | orchestrator |
2026-02-13 02:46:00.806716 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-13 02:46:00.806734 | orchestrator | Friday 13 February 2026 02:45:50 +0000 (0:00:07.503) 0:00:29.575 *******
2026-02-13 02:46:00.806825 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:46:00.806852 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:46:00.806872 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:46:00.806891 | orchestrator |
2026-02-13 02:46:00.806910 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-13 02:46:00.806930 | orchestrator | Friday 13 February 2026 02:45:51 +0000 (0:00:00.454) 0:00:30.030 *******
2026-02-13 02:46:00.806951 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-02-13 02:46:00.806970 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-02-13 02:46:00.806990 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-02-13 02:46:00.807009 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-02-13 02:46:00.807038 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-02-13 02:46:00.807050 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-02-13 02:46:00.807061 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-02-13 02:46:00.807071 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-02-13 02:46:00.807082 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-02-13 02:46:00.807093 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-02-13 02:46:00.807103 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-02-13 02:46:00.807114 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-02-13 02:46:00.807125 | orchestrator |
2026-02-13 02:46:00.807135 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-02-13 02:46:00.807146 | orchestrator | Friday 13 February 2026 02:45:54 +0000 (0:00:03.471) 0:00:33.501 *******
2026-02-13 02:46:00.807168 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:46:00.807179 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:46:00.807189 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:46:00.807200 | orchestrator |
2026-02-13 02:46:00.807211 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-13 02:46:00.807221 | orchestrator |
2026-02-13 02:46:00.807232 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-13 02:46:00.807243 | orchestrator | Friday 13 February 2026 02:45:55 +0000 (0:00:01.309) 0:00:34.811 *******
2026-02-13 02:46:00.807253 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:46:00.807264 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:46:00.807274 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:46:00.807286 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:46:00.807296 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:46:00.807307 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:46:00.807317 | orchestrator | ok: [testbed-manager]
2026-02-13 02:46:00.807328 | orchestrator |
2026-02-13 02:46:00.807339 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 02:46:00.807350 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:46:00.807362 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:46:00.807374 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:46:00.807385 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:46:00.807395 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 02:46:00.807406 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 02:46:00.807464 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 02:46:00.807476 | orchestrator |
2026-02-13 02:46:00.807486 | orchestrator |
2026-02-13 02:46:00.807497 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 02:46:00.807508 | orchestrator | Friday 13 February 2026 02:46:00 +0000 (0:00:04.898) 0:00:39.710 *******
2026-02-13 02:46:00.807519 | orchestrator | ===============================================================================
2026-02-13 02:46:00.807530 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.74s
2026-02-13 02:46:00.807541 | orchestrator | Install required packages (Debian) -------------------------------------- 7.50s
2026-02-13 02:46:00.807552 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.90s
2026-02-13 02:46:00.807562 | orchestrator | Copy fact files --------------------------------------------------------- 3.47s
2026-02-13 02:46:00.807573 | orchestrator | Create custom facts directory ------------------------------------------- 1.35s
2026-02-13 02:46:00.807599 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.31s
2026-02-13 02:46:00.807623 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.13s
2026-02-13 02:46:01.088137 | orchestrator | Copy fact file ---------------------------------------------------------- 1.11s
2026-02-13 02:46:01.088227 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s
2026-02-13 02:46:01.088239 | orchestrator | osism.commons.repository : Remove sources.list
file --------------------- 0.47s 2026-02-13 02:46:01.088249 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s 2026-02-13 02:46:01.088258 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s 2026-02-13 02:46:01.088294 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s 2026-02-13 02:46:01.088309 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.17s 2026-02-13 02:46:01.088323 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2026-02-13 02:46:01.088339 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s 2026-02-13 02:46:01.088354 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.10s 2026-02-13 02:46:01.088386 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.09s 2026-02-13 02:46:01.391837 | orchestrator | + osism apply bootstrap 2026-02-13 02:46:13.549783 | orchestrator | 2026-02-13 02:46:13 | INFO  | Task f6ed79bb-b43e-4971-9206-c27b0403a332 (bootstrap) was prepared for execution. 2026-02-13 02:46:13.549880 | orchestrator | 2026-02-13 02:46:13 | INFO  | It takes a moment until task f6ed79bb-b43e-4971-9206-c27b0403a332 (bootstrap) has been started and output is visible here. 
2026-02-13 02:46:29.268842 | orchestrator | 2026-02-13 02:46:29.268990 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-02-13 02:46:29.269021 | orchestrator | 2026-02-13 02:46:29.269042 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-02-13 02:46:29.269061 | orchestrator | Friday 13 February 2026 02:46:17 +0000 (0:00:00.150) 0:00:00.150 ******* 2026-02-13 02:46:29.269080 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:29.269100 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:29.269119 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:29.269137 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:29.269155 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:29.269172 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:29.269190 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:29.269209 | orchestrator | 2026-02-13 02:46:29.269229 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-13 02:46:29.269248 | orchestrator | 2026-02-13 02:46:29.269266 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-13 02:46:29.269279 | orchestrator | Friday 13 February 2026 02:46:17 +0000 (0:00:00.241) 0:00:00.391 ******* 2026-02-13 02:46:29.269289 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:29.269300 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:29.269313 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:29.269326 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:29.269338 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:29.269351 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:29.269363 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:29.269375 | orchestrator | 2026-02-13 02:46:29.269388 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-02-13 02:46:29.269401 | orchestrator | 2026-02-13 02:46:29.269413 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-13 02:46:29.269426 | orchestrator | Friday 13 February 2026 02:46:21 +0000 (0:00:03.644) 0:00:04.036 ******* 2026-02-13 02:46:29.269446 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-13 02:46:29.269464 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-13 02:46:29.269507 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-13 02:46:29.269527 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-02-13 02:46:29.269538 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-13 02:46:29.269549 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-02-13 02:46:29.269560 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-13 02:46:29.269571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 02:46:29.269582 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-13 02:46:29.269593 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-13 02:46:29.269632 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-13 02:46:29.269643 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-13 02:46:29.269654 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-02-13 02:46:29.269665 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 02:46:29.269675 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-13 02:46:29.269686 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:46:29.269698 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-13 02:46:29.269709 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2026-02-13 02:46:29.269719 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-13 02:46:29.269730 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-13 02:46:29.269741 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-13 02:46:29.269751 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-13 02:46:29.269762 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-13 02:46:29.269772 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-13 02:46:29.269783 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-13 02:46:29.269793 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:46:29.269804 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-13 02:46:29.269815 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-13 02:46:29.269825 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-13 02:46:29.269836 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-13 02:46:29.269847 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-13 02:46:29.269857 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-13 02:46:29.269868 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-13 02:46:29.269878 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-13 02:46:29.269889 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-13 02:46:29.269899 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-13 02:46:29.269910 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-13 02:46:29.269920 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-13 02:46:29.269931 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-2)  2026-02-13 02:46:29.269942 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:46:29.269953 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-13 02:46:29.269963 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-13 02:46:29.269977 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-13 02:46:29.269995 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-13 02:46:29.270006 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:46:29.270083 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-13 02:46:29.270115 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-13 02:46:29.270127 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-13 02:46:29.270157 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:46:29.270168 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-13 02:46:29.270179 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-13 02:46:29.270190 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-13 02:46:29.270201 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-13 02:46:29.270212 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:46:29.270223 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-13 02:46:29.270244 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:46:29.270255 | orchestrator | 2026-02-13 02:46:29.270265 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-13 02:46:29.270276 | orchestrator | 2026-02-13 02:46:29.270287 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-13 02:46:29.270298 | orchestrator | Friday 13 February 2026 02:46:22 +0000 (0:00:00.428) 
0:00:04.464 ******* 2026-02-13 02:46:29.270308 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:29.270319 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:29.270330 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:29.270340 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:29.270351 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:29.270363 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:29.270382 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:29.270394 | orchestrator | 2026-02-13 02:46:29.270405 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-13 02:46:29.270415 | orchestrator | Friday 13 February 2026 02:46:23 +0000 (0:00:01.228) 0:00:05.693 ******* 2026-02-13 02:46:29.270426 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:29.270437 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:29.270447 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:29.270458 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:29.270468 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:29.270533 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:29.270546 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:29.270557 | orchestrator | 2026-02-13 02:46:29.270567 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-13 02:46:29.270578 | orchestrator | Friday 13 February 2026 02:46:24 +0000 (0:00:01.217) 0:00:06.911 ******* 2026-02-13 02:46:29.270590 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:46:29.270604 | orchestrator | 2026-02-13 02:46:29.270615 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-13 02:46:29.270625 | orchestrator | Friday 13 
February 2026 02:46:24 +0000 (0:00:00.275) 0:00:07.187 ******* 2026-02-13 02:46:29.270636 | orchestrator | changed: [testbed-manager] 2026-02-13 02:46:29.270647 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:46:29.270658 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:46:29.270668 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:46:29.270679 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:46:29.270689 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:46:29.270700 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:46:29.270710 | orchestrator | 2026-02-13 02:46:29.270721 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-13 02:46:29.270732 | orchestrator | Friday 13 February 2026 02:46:26 +0000 (0:00:02.113) 0:00:09.300 ******* 2026-02-13 02:46:29.270743 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:46:29.270755 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:46:29.270767 | orchestrator | 2026-02-13 02:46:29.270778 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-13 02:46:29.270789 | orchestrator | Friday 13 February 2026 02:46:27 +0000 (0:00:00.247) 0:00:09.548 ******* 2026-02-13 02:46:29.270800 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:46:29.270810 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:46:29.270821 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:46:29.270831 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:46:29.270842 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:46:29.270852 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:46:29.270863 | orchestrator | 2026-02-13 02:46:29.270874 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2026-02-13 02:46:29.270892 | orchestrator | Friday 13 February 2026 02:46:28 +0000 (0:00:01.003) 0:00:10.551 ******* 2026-02-13 02:46:29.270903 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:46:29.270921 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:46:29.270936 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:46:29.270947 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:46:29.270957 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:46:29.270968 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:46:29.270978 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:46:29.270989 | orchestrator | 2026-02-13 02:46:29.270999 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-13 02:46:29.271010 | orchestrator | Friday 13 February 2026 02:46:28 +0000 (0:00:00.569) 0:00:11.121 ******* 2026-02-13 02:46:29.271020 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:46:29.271031 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:46:29.271047 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:46:29.271058 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:46:29.271068 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:46:29.271079 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:46:29.271089 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:29.271100 | orchestrator | 2026-02-13 02:46:29.271111 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-13 02:46:29.271123 | orchestrator | Friday 13 February 2026 02:46:29 +0000 (0:00:00.455) 0:00:11.577 ******* 2026-02-13 02:46:29.271133 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:46:29.271144 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:46:29.271163 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:46:41.348548 | orchestrator | skipping: 
[testbed-node-5] 2026-02-13 02:46:41.348660 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:46:41.348674 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:46:41.348686 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:46:41.348697 | orchestrator | 2026-02-13 02:46:41.348710 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-13 02:46:41.348722 | orchestrator | Friday 13 February 2026 02:46:29 +0000 (0:00:00.251) 0:00:11.828 ******* 2026-02-13 02:46:41.348735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:46:41.348764 | orchestrator | 2026-02-13 02:46:41.348775 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-13 02:46:41.348787 | orchestrator | Friday 13 February 2026 02:46:29 +0000 (0:00:00.308) 0:00:12.137 ******* 2026-02-13 02:46:41.348799 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:46:41.348810 | orchestrator | 2026-02-13 02:46:41.348821 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-13 02:46:41.348832 | orchestrator | Friday 13 February 2026 02:46:29 +0000 (0:00:00.309) 0:00:12.447 ******* 2026-02-13 02:46:41.348843 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.348854 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:41.348865 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:41.348876 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:41.348887 | orchestrator | ok: [testbed-node-2] 2026-02-13 
02:46:41.348898 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:41.348908 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:41.348919 | orchestrator | 2026-02-13 02:46:41.348930 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-13 02:46:41.348941 | orchestrator | Friday 13 February 2026 02:46:31 +0000 (0:00:01.346) 0:00:13.793 ******* 2026-02-13 02:46:41.348952 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:46:41.348986 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:46:41.349002 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:46:41.349022 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:46:41.349041 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:46:41.349060 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:46:41.349079 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:46:41.349097 | orchestrator | 2026-02-13 02:46:41.349118 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-13 02:46:41.349138 | orchestrator | Friday 13 February 2026 02:46:31 +0000 (0:00:00.290) 0:00:14.084 ******* 2026-02-13 02:46:41.349159 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.349180 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:41.349200 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:41.349214 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:41.349227 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:41.349239 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:41.349249 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:41.349260 | orchestrator | 2026-02-13 02:46:41.349271 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-13 02:46:41.349282 | orchestrator | Friday 13 February 2026 02:46:32 +0000 (0:00:00.547) 0:00:14.631 ******* 2026-02-13 02:46:41.349293 | orchestrator | skipping: 
[testbed-manager] 2026-02-13 02:46:41.349303 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:46:41.349314 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:46:41.349325 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:46:41.349335 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:46:41.349346 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:46:41.349356 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:46:41.349368 | orchestrator | 2026-02-13 02:46:41.349379 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-13 02:46:41.349391 | orchestrator | Friday 13 February 2026 02:46:32 +0000 (0:00:00.245) 0:00:14.877 ******* 2026-02-13 02:46:41.349401 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.349412 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:46:41.349423 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:46:41.349433 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:46:41.349444 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:46:41.349455 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:46:41.349465 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:46:41.349475 | orchestrator | 2026-02-13 02:46:41.349486 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-13 02:46:41.349497 | orchestrator | Friday 13 February 2026 02:46:32 +0000 (0:00:00.552) 0:00:15.430 ******* 2026-02-13 02:46:41.349555 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.349567 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:46:41.349577 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:46:41.349588 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:46:41.349598 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:46:41.349609 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:46:41.349619 | orchestrator | changed: 
[testbed-node-1] 2026-02-13 02:46:41.349630 | orchestrator | 2026-02-13 02:46:41.349641 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-13 02:46:41.349652 | orchestrator | Friday 13 February 2026 02:46:34 +0000 (0:00:01.092) 0:00:16.523 ******* 2026-02-13 02:46:41.349663 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.349674 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:41.349685 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:41.349695 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:41.349706 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:41.349717 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:41.349727 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:41.349738 | orchestrator | 2026-02-13 02:46:41.349749 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-13 02:46:41.349760 | orchestrator | Friday 13 February 2026 02:46:35 +0000 (0:00:01.082) 0:00:17.605 ******* 2026-02-13 02:46:41.349800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:46:41.349813 | orchestrator | 2026-02-13 02:46:41.349824 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-13 02:46:41.349847 | orchestrator | Friday 13 February 2026 02:46:35 +0000 (0:00:00.313) 0:00:17.919 ******* 2026-02-13 02:46:41.349858 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:46:41.349868 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:46:41.349879 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:46:41.349890 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:46:41.349901 | orchestrator | changed: [testbed-node-1] 2026-02-13 
02:46:41.349911 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:46:41.349921 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:46:41.349932 | orchestrator | 2026-02-13 02:46:41.349943 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-13 02:46:41.349954 | orchestrator | Friday 13 February 2026 02:46:36 +0000 (0:00:01.363) 0:00:19.282 ******* 2026-02-13 02:46:41.349964 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.349975 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:41.349986 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:41.349996 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:41.350007 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:41.350078 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:41.350091 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:41.350102 | orchestrator | 2026-02-13 02:46:41.350113 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-13 02:46:41.350124 | orchestrator | Friday 13 February 2026 02:46:37 +0000 (0:00:00.223) 0:00:19.505 ******* 2026-02-13 02:46:41.350135 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.350145 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:41.350156 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:41.350167 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:41.350177 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:41.350188 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:41.350198 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:41.350209 | orchestrator | 2026-02-13 02:46:41.350220 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-13 02:46:41.350231 | orchestrator | Friday 13 February 2026 02:46:37 +0000 (0:00:00.198) 0:00:19.704 ******* 2026-02-13 02:46:41.350241 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.350252 | 
orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:41.350263 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:41.350273 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:41.350284 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:41.350294 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:41.350305 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:41.350315 | orchestrator | 2026-02-13 02:46:41.350326 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-13 02:46:41.350337 | orchestrator | Friday 13 February 2026 02:46:37 +0000 (0:00:00.221) 0:00:19.926 ******* 2026-02-13 02:46:41.350349 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:46:41.350361 | orchestrator | 2026-02-13 02:46:41.350372 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-13 02:46:41.350383 | orchestrator | Friday 13 February 2026 02:46:37 +0000 (0:00:00.273) 0:00:20.200 ******* 2026-02-13 02:46:41.350393 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.350404 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:41.350415 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:41.350426 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:41.350446 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:41.350457 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:41.350467 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:41.350478 | orchestrator | 2026-02-13 02:46:41.350489 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-13 02:46:41.350499 | orchestrator | Friday 13 February 2026 02:46:38 +0000 (0:00:00.557) 0:00:20.757 ******* 2026-02-13 02:46:41.350552 | orchestrator | 
skipping: [testbed-manager] 2026-02-13 02:46:41.350563 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:46:41.350574 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:46:41.350585 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:46:41.350595 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:46:41.350606 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:46:41.350617 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:46:41.350628 | orchestrator | 2026-02-13 02:46:41.350638 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-13 02:46:41.350649 | orchestrator | Friday 13 February 2026 02:46:38 +0000 (0:00:00.220) 0:00:20.977 ******* 2026-02-13 02:46:41.350660 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.350671 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:41.350682 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:41.350692 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:46:41.350703 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:41.350714 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:46:41.350725 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:46:41.350735 | orchestrator | 2026-02-13 02:46:41.350746 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-13 02:46:41.350757 | orchestrator | Friday 13 February 2026 02:46:39 +0000 (0:00:01.069) 0:00:22.047 ******* 2026-02-13 02:46:41.350768 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.350792 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:41.350803 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:41.350819 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:41.350830 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:46:41.350840 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:46:41.350851 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:46:41.350862 | orchestrator | 
2026-02-13 02:46:41.350873 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-13 02:46:41.350883 | orchestrator | Friday 13 February 2026 02:46:40 +0000 (0:00:00.571) 0:00:22.618 ******* 2026-02-13 02:46:41.350894 | orchestrator | ok: [testbed-manager] 2026-02-13 02:46:41.350905 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:46:41.350915 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:46:41.350926 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:46:41.350945 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:47:21.362095 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:47:21.362257 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:47:21.362276 | orchestrator | 2026-02-13 02:47:21.362290 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-13 02:47:21.362302 | orchestrator | Friday 13 February 2026 02:46:41 +0000 (0:00:01.174) 0:00:23.792 ******* 2026-02-13 02:47:21.362314 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.362325 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.362336 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:47:21.362347 | orchestrator | changed: [testbed-manager] 2026-02-13 02:47:21.362358 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:47:21.362369 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:47:21.362380 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:47:21.362391 | orchestrator | 2026-02-13 02:47:21.362402 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2026-02-13 02:47:21.362413 | orchestrator | Friday 13 February 2026 02:46:56 +0000 (0:00:15.295) 0:00:39.088 ******* 2026-02-13 02:47:21.362424 | orchestrator | ok: [testbed-manager] 2026-02-13 02:47:21.362435 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.362445 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.362482 | orchestrator 
| ok: [testbed-node-5] 2026-02-13 02:47:21.362493 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:47:21.362504 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:47:21.362516 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:47:21.362528 | orchestrator | 2026-02-13 02:47:21.362541 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-02-13 02:47:21.362553 | orchestrator | Friday 13 February 2026 02:46:56 +0000 (0:00:00.313) 0:00:39.401 ******* 2026-02-13 02:47:21.362565 | orchestrator | ok: [testbed-manager] 2026-02-13 02:47:21.362578 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.362590 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.362658 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:47:21.362671 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:47:21.362683 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:47:21.362695 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:47:21.362708 | orchestrator | 2026-02-13 02:47:21.362720 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-02-13 02:47:21.362733 | orchestrator | Friday 13 February 2026 02:46:57 +0000 (0:00:00.229) 0:00:39.630 ******* 2026-02-13 02:47:21.362746 | orchestrator | ok: [testbed-manager] 2026-02-13 02:47:21.362758 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.362770 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.362782 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:47:21.362795 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:47:21.362806 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:47:21.362818 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:47:21.362831 | orchestrator | 2026-02-13 02:47:21.362844 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-02-13 02:47:21.362856 | orchestrator | Friday 13 February 2026 02:46:57 +0000 (0:00:00.245) 0:00:39.876 ******* 2026-02-13 
02:47:21.362871 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:47:21.362886 | orchestrator | 2026-02-13 02:47:21.362897 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-02-13 02:47:21.362908 | orchestrator | Friday 13 February 2026 02:46:57 +0000 (0:00:00.326) 0:00:40.202 ******* 2026-02-13 02:47:21.362919 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.362929 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:47:21.362940 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:47:21.362950 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:47:21.362961 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.362971 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:47:21.362982 | orchestrator | ok: [testbed-manager] 2026-02-13 02:47:21.362993 | orchestrator | 2026-02-13 02:47:21.363003 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-02-13 02:47:21.363014 | orchestrator | Friday 13 February 2026 02:46:59 +0000 (0:00:01.632) 0:00:41.834 ******* 2026-02-13 02:47:21.363025 | orchestrator | changed: [testbed-manager] 2026-02-13 02:47:21.363036 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:47:21.363046 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:47:21.363057 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:47:21.363068 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:47:21.363078 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:47:21.363088 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:47:21.363099 | orchestrator | 2026-02-13 02:47:21.363110 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-02-13 02:47:21.363120 | 
orchestrator | Friday 13 February 2026 02:47:00 +0000 (0:00:01.098) 0:00:42.932 ******* 2026-02-13 02:47:21.363131 | orchestrator | ok: [testbed-manager] 2026-02-13 02:47:21.363142 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.363153 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:47:21.363163 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:47:21.363174 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.363193 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:47:21.363204 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:47:21.363214 | orchestrator | 2026-02-13 02:47:21.363225 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-02-13 02:47:21.363236 | orchestrator | Friday 13 February 2026 02:47:01 +0000 (0:00:00.828) 0:00:43.761 ******* 2026-02-13 02:47:21.363262 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:47:21.363275 | orchestrator | 2026-02-13 02:47:21.363286 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-02-13 02:47:21.363298 | orchestrator | Friday 13 February 2026 02:47:01 +0000 (0:00:00.326) 0:00:44.088 ******* 2026-02-13 02:47:21.363308 | orchestrator | changed: [testbed-manager] 2026-02-13 02:47:21.363319 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:47:21.363330 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:47:21.363340 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:47:21.363351 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:47:21.363362 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:47:21.363373 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:47:21.363383 | orchestrator | 2026-02-13 02:47:21.363434 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2026-02-13 02:47:21.363446 | orchestrator | Friday 13 February 2026 02:47:02 +0000 (0:00:01.022) 0:00:45.111 ******* 2026-02-13 02:47:21.363457 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:47:21.363468 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:47:21.363479 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:47:21.363490 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:47:21.363500 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:47:21.363511 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:47:21.363521 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:47:21.363532 | orchestrator | 2026-02-13 02:47:21.363543 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************ 2026-02-13 02:47:21.363554 | orchestrator | Friday 13 February 2026 02:47:02 +0000 (0:00:00.253) 0:00:45.364 ******* 2026-02-13 02:47:21.363565 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:47:21.363576 | orchestrator | 2026-02-13 02:47:21.363586 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] ********** 2026-02-13 02:47:21.363616 | orchestrator | Friday 13 February 2026 02:47:03 +0000 (0:00:00.348) 0:00:45.713 ******* 2026-02-13 02:47:21.363628 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.363638 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:47:21.363649 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:47:21.363660 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.363671 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:47:21.363681 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:47:21.363692 | orchestrator | ok: [testbed-manager] 2026-02-13 02:47:21.363703 | 
orchestrator | 2026-02-13 02:47:21.363713 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] **************** 2026-02-13 02:47:21.363724 | orchestrator | Friday 13 February 2026 02:47:04 +0000 (0:00:01.685) 0:00:47.398 ******* 2026-02-13 02:47:21.363735 | orchestrator | changed: [testbed-manager] 2026-02-13 02:47:21.363746 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:47:21.363756 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:47:21.363767 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:47:21.363778 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:47:21.363788 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:47:21.363799 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:47:21.363810 | orchestrator | 2026-02-13 02:47:21.363821 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2026-02-13 02:47:21.363840 | orchestrator | Friday 13 February 2026 02:47:06 +0000 (0:00:01.108) 0:00:48.507 ******* 2026-02-13 02:47:21.363851 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:47:21.363862 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:47:21.363873 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:47:21.363884 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:47:21.363894 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:47:21.363905 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:47:21.363916 | orchestrator | changed: [testbed-manager] 2026-02-13 02:47:21.363927 | orchestrator | 2026-02-13 02:47:21.363938 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2026-02-13 02:47:21.363949 | orchestrator | Friday 13 February 2026 02:47:18 +0000 (0:00:12.178) 0:01:00.686 ******* 2026-02-13 02:47:21.363960 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.363970 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.363981 | orchestrator | ok: 
[testbed-manager] 2026-02-13 02:47:21.363992 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:47:21.364003 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:47:21.364013 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:47:21.364024 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:47:21.364035 | orchestrator | 2026-02-13 02:47:21.364046 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2026-02-13 02:47:21.364056 | orchestrator | Friday 13 February 2026 02:47:19 +0000 (0:00:01.443) 0:01:02.130 ******* 2026-02-13 02:47:21.364067 | orchestrator | ok: [testbed-manager] 2026-02-13 02:47:21.364078 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.364089 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.364099 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:47:21.364110 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:47:21.364121 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:47:21.364131 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:47:21.364142 | orchestrator | 2026-02-13 02:47:21.364153 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2026-02-13 02:47:21.364164 | orchestrator | Friday 13 February 2026 02:47:20 +0000 (0:00:00.916) 0:01:03.047 ******* 2026-02-13 02:47:21.364175 | orchestrator | ok: [testbed-manager] 2026-02-13 02:47:21.364185 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.364196 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.364207 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:47:21.364218 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:47:21.364228 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:47:21.364239 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:47:21.364250 | orchestrator | 2026-02-13 02:47:21.364261 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-02-13 02:47:21.364272 | orchestrator | Friday 
13 February 2026 02:47:20 +0000 (0:00:00.225) 0:01:03.272 ******* 2026-02-13 02:47:21.364282 | orchestrator | ok: [testbed-manager] 2026-02-13 02:47:21.364293 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:47:21.364304 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:47:21.364314 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:47:21.364325 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:47:21.364336 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:47:21.364347 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:47:21.364357 | orchestrator | 2026-02-13 02:47:21.364374 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-02-13 02:47:21.364386 | orchestrator | Friday 13 February 2026 02:47:21 +0000 (0:00:00.223) 0:01:03.496 ******* 2026-02-13 02:47:21.364397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:47:21.364409 | orchestrator | 2026-02-13 02:47:21.364428 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-02-13 02:49:42.346273 | orchestrator | Friday 13 February 2026 02:47:21 +0000 (0:00:00.313) 0:01:03.809 ******* 2026-02-13 02:49:42.346464 | orchestrator | ok: [testbed-manager] 2026-02-13 02:49:42.346483 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:49:42.346495 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:49:42.346506 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:49:42.346516 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:49:42.346527 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:49:42.346538 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:49:42.346548 | orchestrator | 2026-02-13 02:49:42.346560 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] 
*************************** 2026-02-13 02:49:42.346571 | orchestrator | Friday 13 February 2026 02:47:23 +0000 (0:00:01.862) 0:01:05.671 ******* 2026-02-13 02:49:42.346582 | orchestrator | changed: [testbed-manager] 2026-02-13 02:49:42.346594 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:49:42.346605 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:49:42.346616 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:49:42.346626 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:49:42.346637 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:49:42.346648 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:49:42.346658 | orchestrator | 2026-02-13 02:49:42.346670 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-02-13 02:49:42.346681 | orchestrator | Friday 13 February 2026 02:47:23 +0000 (0:00:00.563) 0:01:06.234 ******* 2026-02-13 02:49:42.346692 | orchestrator | ok: [testbed-manager] 2026-02-13 02:49:42.346702 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:49:42.346713 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:49:42.346724 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:49:42.346734 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:49:42.346744 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:49:42.346755 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:49:42.346765 | orchestrator | 2026-02-13 02:49:42.346776 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-02-13 02:49:42.346788 | orchestrator | Friday 13 February 2026 02:47:24 +0000 (0:00:00.238) 0:01:06.473 ******* 2026-02-13 02:49:42.346799 | orchestrator | ok: [testbed-manager] 2026-02-13 02:49:42.346809 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:49:42.346820 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:49:42.346831 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:49:42.346841 | orchestrator | ok: [testbed-node-2] 
2026-02-13 02:49:42.346852 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:49:42.346862 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:49:42.346872 | orchestrator | 2026-02-13 02:49:42.346883 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-02-13 02:49:42.346894 | orchestrator | Friday 13 February 2026 02:47:25 +0000 (0:00:01.194) 0:01:07.667 ******* 2026-02-13 02:49:42.346905 | orchestrator | changed: [testbed-manager] 2026-02-13 02:49:42.346955 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:49:42.346967 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:49:42.346978 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:49:42.346989 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:49:42.347000 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:49:42.347011 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:49:42.347022 | orchestrator | 2026-02-13 02:49:42.347033 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-02-13 02:49:42.347048 | orchestrator | Friday 13 February 2026 02:47:26 +0000 (0:00:01.737) 0:01:09.405 ******* 2026-02-13 02:49:42.347059 | orchestrator | ok: [testbed-manager] 2026-02-13 02:49:42.347070 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:49:42.347080 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:49:42.347091 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:49:42.347101 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:49:42.347112 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:49:42.347122 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:49:42.347133 | orchestrator | 2026-02-13 02:49:42.347143 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-02-13 02:49:42.347154 | orchestrator | Friday 13 February 2026 02:47:29 +0000 (0:00:02.326) 0:01:11.731 ******* 2026-02-13 02:49:42.347174 | orchestrator | ok: 
[testbed-manager] 2026-02-13 02:49:42.347185 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:49:42.347195 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:49:42.347206 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:49:42.347216 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:49:42.347227 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:49:42.347237 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:49:42.347248 | orchestrator | 2026-02-13 02:49:42.347258 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-02-13 02:49:42.347269 | orchestrator | Friday 13 February 2026 02:48:01 +0000 (0:00:32.219) 0:01:43.950 ******* 2026-02-13 02:49:42.347280 | orchestrator | changed: [testbed-manager] 2026-02-13 02:49:42.347291 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:49:42.347301 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:49:42.347312 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:49:42.347323 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:49:42.347333 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:49:42.347344 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:49:42.347354 | orchestrator | 2026-02-13 02:49:42.347365 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-02-13 02:49:42.347376 | orchestrator | Friday 13 February 2026 02:49:26 +0000 (0:01:25.042) 0:03:08.992 ******* 2026-02-13 02:49:42.347387 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:49:42.347398 | orchestrator | ok: [testbed-manager] 2026-02-13 02:49:42.347408 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:49:42.347419 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:49:42.347430 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:49:42.347440 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:49:42.347450 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:49:42.347461 | orchestrator | 2026-02-13 02:49:42.347472 | 
orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-02-13 02:49:42.347483 | orchestrator | Friday 13 February 2026 02:49:28 +0000 (0:00:01.747) 0:03:10.740 ******* 2026-02-13 02:49:42.347493 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:49:42.347504 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:49:42.347514 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:49:42.347525 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:49:42.347535 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:49:42.347546 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:49:42.347556 | orchestrator | changed: [testbed-manager] 2026-02-13 02:49:42.347567 | orchestrator | 2026-02-13 02:49:42.347578 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-02-13 02:49:42.347589 | orchestrator | Friday 13 February 2026 02:49:41 +0000 (0:00:12.758) 0:03:23.499 ******* 2026-02-13 02:49:42.347634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-02-13 02:49:42.347663 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 
'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-02-13 02:49:42.347678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-02-13 02:49:42.347699 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-13 02:49:42.347710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-02-13 02:49:42.347721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2026-02-13 02:49:42.347732 | orchestrator | 2026-02-13 02:49:42.347743 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-02-13 02:49:42.347754 | orchestrator | Friday 13 February 2026 02:49:41 +0000 (0:00:00.425) 0:03:23.924 ******* 2026-02-13 02:49:42.347765 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 
262144})  2026-02-13 02:49:42.347776 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-13 02:49:42.347786 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:49:42.347797 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-13 02:49:42.347808 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:49:42.347819 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-02-13 02:49:42.347830 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:49:42.347840 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:49:42.347851 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-13 02:49:42.347862 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-13 02:49:42.347872 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-13 02:49:42.347883 | orchestrator | 2026-02-13 02:49:42.347893 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-02-13 02:49:42.347904 | orchestrator | Friday 13 February 2026 02:49:42 +0000 (0:00:00.796) 0:03:24.720 ******* 2026-02-13 02:49:42.347950 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-13 02:49:42.347973 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-13 02:49:42.347994 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-13 02:49:42.348014 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-13 02:49:42.348033 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 
'value': 16777216})  2026-02-13 02:49:42.348058 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-13 02:49:49.187466 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-13 02:49:49.187575 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-13 02:49:49.187591 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-13 02:49:49.187621 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-13 02:49:49.187632 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-13 02:49:49.187642 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-13 02:49:49.187652 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-13 02:49:49.187662 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-13 02:49:49.187672 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-13 02:49:49.187682 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-13 02:49:49.187693 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-13 02:49:49.187703 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-13 02:49:49.187713 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-13 02:49:49.187723 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 
 2026-02-13 02:49:49.187733 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-13 02:49:49.187744 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:49:49.187755 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-13 02:49:49.187766 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-13 02:49:49.187776 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-02-13 02:49:49.187786 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-13 02:49:49.187796 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-13 02:49:49.187806 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-13 02:49:49.187816 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-13 02:49:49.187826 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-13 02:49:49.187836 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-02-13 02:49:49.187846 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-13 02:49:49.187856 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-02-13 02:49:49.187866 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-02-13 02:49:49.187876 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:49:49.187886 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 
16777216})  2026-02-13 02:49:49.187896 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-02-13 02:49:49.187906 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-02-13 02:49:49.187916 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-02-13 02:49:49.187926 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-02-13 02:49:49.187960 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-02-13 02:49:49.187970 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-02-13 02:49:49.187989 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:49:49.188001 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:49:49.188026 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-02-13 02:49:49.188038 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-02-13 02:49:49.188049 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-02-13 02:49:49.188060 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-02-13 02:49:49.188071 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-02-13 02:49:49.188099 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-02-13 02:49:49.188111 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-02-13 02:49:49.188122 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 
2026-02-13 02:49:49.188133 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-13 02:49:49.188144 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-13 02:49:49.188155 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-13 02:49:49.188166 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-13 02:49:49.188177 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-13 02:49:49.188188 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-13 02:49:49.188199 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-13 02:49:49.188210 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-13 02:49:49.188221 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-13 02:49:49.188232 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-13 02:49:49.188243 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-13 02:49:49.188253 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-13 02:49:49.188263 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-13 02:49:49.188272 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-13 02:49:49.188281 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-13 02:49:49.188291 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-13 02:49:49.188301 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-13 02:49:49.188310 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-13 02:49:49.188320 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-13 02:49:49.188329 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-13 02:49:49.188339 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-13 02:49:49.188349 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-13 02:49:49.188360 | orchestrator |
2026-02-13 02:49:49.188370 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-13 02:49:49.188387 | orchestrator | Friday 13 February 2026 02:49:47 +0000 (0:00:04.823) 0:03:29.544 *******
2026-02-13 02:49:49.188397 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-13 02:49:49.188407 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-13 02:49:49.188416 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-13 02:49:49.188426 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-13 02:49:49.188436 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-13 02:49:49.188445 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-13 02:49:49.188455 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-13 02:49:49.188464 | orchestrator |
2026-02-13 02:49:49.188474 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-13 02:49:49.188484 | orchestrator | Friday 13 February 2026 02:49:47 +0000 (0:00:00.638) 0:03:30.183 *******
2026-02-13 02:49:49.188493 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:49:49.188503 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:49:49.188513 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:49:49.188528 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:49:49.188537 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:49:49.188547 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:49:49.188557 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:49:49.188566 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:49:49.188576 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:49:49.188586 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:49:49.188602 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:50:02.655137 | orchestrator |
2026-02-13 02:50:02.655259 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-13 02:50:02.655277 | orchestrator | Friday 13 February 2026 02:49:49 +0000 (0:00:01.453) 0:03:31.636 *******
2026-02-13 02:50:02.655289 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:50:02.655301 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:50:02.655314 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:50:02.655325 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:50:02.655336 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:50:02.655347 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:50:02.655358 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:50:02.655369 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:50:02.655380 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:50:02.655391 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:50:02.655401 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-13 02:50:02.655420 | orchestrator |
2026-02-13 02:50:02.655440 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-02-13 02:50:02.655459 | orchestrator | Friday 13 February 2026 02:49:49 +0000 (0:00:00.613) 0:03:32.249 *******
2026-02-13 02:50:02.655509 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-13 02:50:02.655530 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:50:02.655549 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-13 02:50:02.655568 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-13 02:50:02.655588 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:50:02.655609 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:50:02.655629 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-13 02:50:02.655648 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:50:02.655659 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-13 02:50:02.655670 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-13 02:50:02.655680 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-02-13 02:50:02.655692 | orchestrator |
2026-02-13 02:50:02.655703 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-02-13 02:50:02.655713 | orchestrator | Friday 13 February 2026 02:49:50 +0000 (0:00:00.598) 0:03:32.848 *******
2026-02-13 02:50:02.655724 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:50:02.655735 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:50:02.655746 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:50:02.655757 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:50:02.655767 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:50:02.655778 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:50:02.655789 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:50:02.655799 | orchestrator |
2026-02-13 02:50:02.655810 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-02-13 02:50:02.655821 | orchestrator | Friday 13 February 2026 02:49:50 +0000 (0:00:00.338) 0:03:33.186 *******
2026-02-13 02:50:02.655832 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:50:02.655843 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:50:02.655854 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:50:02.655864 | orchestrator | ok: [testbed-manager]
2026-02-13 02:50:02.655875 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:50:02.655885 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:50:02.655896 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:50:02.655906 | orchestrator |
2026-02-13 02:50:02.655917 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-02-13 02:50:02.655930 | orchestrator | Friday 13 February 2026 02:49:56 +0000 (0:00:05.739) 0:03:38.926 *******
2026-02-13 02:50:02.655949 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-02-13 02:50:02.656012 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-02-13 02:50:02.656032 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:50:02.656049 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-02-13 02:50:02.656067 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:50:02.656087 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-02-13 02:50:02.656105 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:50:02.656123 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-02-13 02:50:02.656134 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:50:02.656170 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-02-13 02:50:02.656189 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:50:02.656208 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:50:02.656226 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-02-13 02:50:02.656246 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:50:02.656265 | orchestrator |
2026-02-13 02:50:02.656284 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-02-13 02:50:02.656306 | orchestrator | Friday 13 February 2026 02:49:56 +0000 (0:00:00.384) 0:03:39.310 *******
2026-02-13 02:50:02.656317 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-02-13 02:50:02.656328 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-02-13 02:50:02.656339 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-02-13 02:50:02.656370 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-02-13 02:50:02.656382 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-02-13 02:50:02.656393 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-02-13 02:50:02.656403 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-02-13 02:50:02.656414 | orchestrator |
2026-02-13 02:50:02.656425 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-02-13 02:50:02.656436 | orchestrator | Friday 13 February 2026 02:49:58 +0000 (0:00:01.233) 0:03:40.544 *******
2026-02-13 02:50:02.656449 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 02:50:02.656463 | orchestrator |
2026-02-13 02:50:02.656474 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-02-13 02:50:02.656485 | orchestrator | Friday 13 February 2026 02:49:58 +0000 (0:00:00.463) 0:03:41.007 *******
2026-02-13 02:50:02.656496 | orchestrator | ok: [testbed-manager]
2026-02-13 02:50:02.656507 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:50:02.656517 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:50:02.656528 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:50:02.656539 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:50:02.656549 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:50:02.656560 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:50:02.656571 | orchestrator |
2026-02-13 02:50:02.656581 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-02-13 02:50:02.656592 | orchestrator | Friday 13 February 2026 02:49:59 +0000 (0:00:01.223) 0:03:42.230 *******
2026-02-13 02:50:02.656603 | orchestrator | ok: [testbed-manager]
2026-02-13 02:50:02.656613 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:50:02.656624 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:50:02.656635 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:50:02.656645 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:50:02.656656 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:50:02.656667 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:50:02.656677 | orchestrator |
2026-02-13 02:50:02.656688 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-02-13 02:50:02.656699 | orchestrator | Friday 13 February 2026 02:50:00 +0000 (0:00:00.612) 0:03:42.843 *******
2026-02-13 02:50:02.656710 | orchestrator | changed: [testbed-manager]
2026-02-13 02:50:02.656720 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:50:02.656731 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:50:02.656742 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:50:02.656752 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:50:02.656763 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:50:02.656774 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:50:02.656784 | orchestrator |
2026-02-13 02:50:02.656795 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-02-13 02:50:02.656806 | orchestrator | Friday 13 February 2026 02:50:00 +0000 (0:00:00.612) 0:03:43.455 *******
2026-02-13 02:50:02.656817 | orchestrator | ok: [testbed-manager]
2026-02-13 02:50:02.656828 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:50:02.656839 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:50:02.656849 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:50:02.656860 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:50:02.656871 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:50:02.656881 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:50:02.656892 | orchestrator |
2026-02-13 02:50:02.656903 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-02-13 02:50:02.656913 | orchestrator | Friday 13 February 2026 02:50:01 +0000 (0:00:00.634) 0:03:44.090 *******
2026-02-13 02:50:02.656935 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770949659.624968, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:02.656950 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770949688.5128493, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:02.657011 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770949683.4340596, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:02.657095 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770949682.2433116, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631372 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770949683.4173157, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631478 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770949693.524334, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631494 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1770949675.5536323, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631533 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631546 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631572 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631584 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631622 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631635 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631646 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 02:50:07.631665 | orchestrator |
2026-02-13 02:50:07.631679 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-02-13 02:50:07.631691 | orchestrator | Friday 13 February 2026 02:50:02 +0000 (0:00:01.010) 0:03:45.101 *******
2026-02-13 02:50:07.631702 | orchestrator | changed: [testbed-manager]
2026-02-13 02:50:07.631714 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:50:07.631725 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:50:07.631736 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:50:07.631747 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:50:07.631758 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:50:07.631769 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:50:07.631780 | orchestrator |
2026-02-13 02:50:07.631791 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-02-13 02:50:07.631802 | orchestrator | Friday 13 February 2026 02:50:03 +0000 (0:00:01.077) 0:03:46.178 *******
2026-02-13 02:50:07.631813 | orchestrator | changed: [testbed-manager]
2026-02-13 02:50:07.631824 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:50:07.631834 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:50:07.631845 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:50:07.631856 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:50:07.631866 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:50:07.631877 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:50:07.631888 | orchestrator |
2026-02-13 02:50:07.631899 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-02-13 02:50:07.631912 | orchestrator | Friday 13 February 2026 02:50:04 +0000 (0:00:01.143) 0:03:47.321 *******
2026-02-13 02:50:07.631925 | orchestrator | changed: [testbed-manager]
2026-02-13 02:50:07.631938 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:50:07.631950 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:50:07.631962 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:50:07.632016 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:50:07.632029 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:50:07.632041 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:50:07.632053 | orchestrator |
2026-02-13 02:50:07.632066 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-02-13 02:50:07.632079 | orchestrator | Friday 13 February 2026 02:50:06 +0000 (0:00:01.236) 0:03:48.558 *******
2026-02-13 02:50:07.632092 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:50:07.632104 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:50:07.632122 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:50:07.632135 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:50:07.632148 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:50:07.632160 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:50:07.632173 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:50:07.632185 | orchestrator |
2026-02-13 02:50:07.632198 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2026-02-13 02:50:07.632211 | orchestrator | Friday 13 February 2026 02:50:06 +0000 (0:00:00.279) 0:03:48.837 *******
2026-02-13 02:50:07.632223 | orchestrator | ok: [testbed-manager]
2026-02-13 02:50:07.632236 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:50:07.632249 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:50:07.632261 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:50:07.632271 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:50:07.632282 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:50:07.632293 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:50:07.632303 | orchestrator |
2026-02-13 02:50:07.632314 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2026-02-13 02:50:07.632324 | orchestrator | Friday 13 February 2026 02:50:07 +0000 (0:00:00.832) 0:03:49.670 *******
2026-02-13 02:50:07.632336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 02:50:07.632356 | orchestrator |
2026-02-13 02:50:07.632367 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2026-02-13 02:50:07.632385 | orchestrator | Friday 13 February 2026 02:50:07 +0000 (0:00:00.413) 0:03:50.083 *******
2026-02-13 02:51:25.051603 | orchestrator | ok: [testbed-manager]
2026-02-13 02:51:25.051712 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:51:25.051728 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:51:25.051740 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:51:25.051751 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:51:25.051762 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:51:25.051773 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:51:25.051784 | orchestrator |
2026-02-13 02:51:25.051797 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2026-02-13 02:51:25.051809 | orchestrator | Friday 13 February 2026 02:50:15 +0000 (0:00:07.797) 0:03:57.881 *******
2026-02-13 02:51:25.051820 | orchestrator | ok: [testbed-manager]
2026-02-13 02:51:25.051831 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:51:25.051842 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:51:25.051853 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:51:25.051863 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:51:25.051874 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:51:25.051884 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:51:25.051895 | orchestrator |
2026-02-13 02:51:25.051906 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2026-02-13 02:51:25.051917 | orchestrator | Friday 13 February 2026 02:50:16 +0000 (0:00:01.231) 0:03:59.112 *******
2026-02-13 02:51:25.051928 | orchestrator | ok: [testbed-manager]
2026-02-13 02:51:25.051938 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:51:25.051949 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:51:25.051960 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:51:25.051970 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:51:25.051981 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:51:25.051991 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:51:25.052002 | orchestrator |
2026-02-13 02:51:25.052013 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2026-02-13 02:51:25.052024 | orchestrator | Friday 13 February 2026 02:50:17 +0000 (0:00:01.143) 0:04:00.256 *******
2026-02-13 02:51:25.052035 | orchestrator | ok: [testbed-manager]
2026-02-13 02:51:25.052045 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:51:25.052056 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:51:25.052067 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:51:25.052077 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:51:25.052088 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:51:25.052099 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:51:25.052110 | orchestrator |
2026-02-13 02:51:25.052120 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-13 02:51:25.052161 | orchestrator | Friday 13 February 2026 02:50:18 +0000 (0:00:00.321) 0:04:00.577 *******
2026-02-13 02:51:25.052175 | orchestrator | ok: [testbed-manager]
2026-02-13 02:51:25.052188 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:51:25.052200 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:51:25.052212 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:51:25.052225 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:51:25.052238 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:51:25.052251 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:51:25.052263 | orchestrator |
2026-02-13 02:51:25.052276 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-13 02:51:25.052289 | orchestrator | Friday 13 February 2026 02:50:18 +0000 (0:00:00.340) 0:04:00.918 *******
2026-02-13 02:51:25.052302 | orchestrator | ok: [testbed-manager]
2026-02-13 02:51:25.052314 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:51:25.052327 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:51:25.052339 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:51:25.052351 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:51:25.052383 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:51:25.052397 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:51:25.052410 | orchestrator |
2026-02-13 02:51:25.052423 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-13 02:51:25.052436 | orchestrator | Friday 13 February 2026 02:50:18 +0000 (0:00:00.309) 0:04:01.228 *******
2026-02-13 02:51:25.052448 | orchestrator | ok: [testbed-manager]
2026-02-13 02:51:25.052461 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:51:25.052473 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:51:25.052487 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:51:25.052499 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:51:25.052511 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:51:25.052522 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:51:25.052533 | orchestrator |
2026-02-13 02:51:25.052544 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-13 02:51:25.052554 | orchestrator | Friday 13 February 2026 02:50:24 +0000 (0:00:05.677) 0:04:06.905 *******
2026-02-13 02:51:25.052571 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 02:51:25.052592 | orchestrator |
2026-02-13 02:51:25.052612 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-13 02:51:25.052630 | orchestrator | Friday 13 February 2026 02:50:24 +0000 (0:00:00.425) 0:04:07.330 *******
2026-02-13 02:51:25.052649 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-13 02:51:25.052668 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-13 02:51:25.052686 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-13 02:51:25.052705 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-13 02:51:25.052724 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:51:25.052758 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-13 02:51:25.052779 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-13 02:51:25.052797 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:51:25.052814 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-13 02:51:25.052825 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:51:25.052836 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-02-13 02:51:25.052847 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:51:25.052858 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-02-13 02:51:25.052869 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-02-13 02:51:25.052880 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-02-13 02:51:25.052891 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-02-13 02:51:25.052920 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:51:25.052932 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:51:25.052943 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-02-13 02:51:25.052954 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-02-13 02:51:25.052964 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:51:25.052975 | orchestrator | 2026-02-13 02:51:25.052986 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-02-13 02:51:25.052997 | orchestrator | Friday 13 February 2026 02:50:25 +0000 (0:00:00.346) 0:04:07.677 ******* 2026-02-13 02:51:25.053008 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:51:25.053019 | orchestrator | 2026-02-13 02:51:25.053030 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-02-13 02:51:25.053041 | orchestrator | Friday 13 February 2026 02:50:25 +0000 (0:00:00.416) 0:04:08.093 ******* 2026-02-13 02:51:25.053061 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-02-13 
02:51:25.053072 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-02-13 02:51:25.053083 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:51:25.053094 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-02-13 02:51:25.053104 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:51:25.053115 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-02-13 02:51:25.053126 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:51:25.053170 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-02-13 02:51:25.053181 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:51:25.053192 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-02-13 02:51:25.053203 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:51:25.053214 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:51:25.053225 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-02-13 02:51:25.053245 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:51:25.053264 | orchestrator | 2026-02-13 02:51:25.053282 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-02-13 02:51:25.053302 | orchestrator | Friday 13 February 2026 02:50:25 +0000 (0:00:00.314) 0:04:08.407 ******* 2026-02-13 02:51:25.053323 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:51:25.053343 | orchestrator | 2026-02-13 02:51:25.053357 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-02-13 02:51:25.053367 | orchestrator | Friday 13 February 2026 02:50:26 +0000 (0:00:00.446) 0:04:08.854 ******* 2026-02-13 02:51:25.053378 | 
orchestrator | changed: [testbed-node-2] 2026-02-13 02:51:25.053389 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:51:25.053400 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:51:25.053411 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:51:25.053422 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:51:25.053433 | orchestrator | changed: [testbed-manager] 2026-02-13 02:51:25.053443 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:51:25.053454 | orchestrator | 2026-02-13 02:51:25.053465 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-02-13 02:51:25.053476 | orchestrator | Friday 13 February 2026 02:51:01 +0000 (0:00:35.114) 0:04:43.968 ******* 2026-02-13 02:51:25.053487 | orchestrator | changed: [testbed-manager] 2026-02-13 02:51:25.053498 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:51:25.053508 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:51:25.053519 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:51:25.053530 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:51:25.053540 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:51:25.053551 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:51:25.053562 | orchestrator | 2026-02-13 02:51:25.053572 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-02-13 02:51:25.053589 | orchestrator | Friday 13 February 2026 02:51:10 +0000 (0:00:08.566) 0:04:52.534 ******* 2026-02-13 02:51:25.053601 | orchestrator | changed: [testbed-manager] 2026-02-13 02:51:25.053612 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:51:25.053622 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:51:25.053633 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:51:25.053644 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:51:25.053654 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:51:25.053665 | orchestrator | changed: 
[testbed-node-3] 2026-02-13 02:51:25.053676 | orchestrator | 2026-02-13 02:51:25.053687 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2026-02-13 02:51:25.053698 | orchestrator | Friday 13 February 2026 02:51:17 +0000 (0:00:07.522) 0:05:00.057 ******* 2026-02-13 02:51:25.053716 | orchestrator | ok: [testbed-manager] 2026-02-13 02:51:25.053727 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:51:25.053738 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:51:25.053749 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:51:25.053760 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:51:25.053770 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:51:25.053781 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:51:25.053792 | orchestrator | 2026-02-13 02:51:25.053803 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2026-02-13 02:51:25.053814 | orchestrator | Friday 13 February 2026 02:51:19 +0000 (0:00:01.631) 0:05:01.688 ******* 2026-02-13 02:51:25.053824 | orchestrator | changed: [testbed-manager] 2026-02-13 02:51:25.053836 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:51:25.053847 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:51:25.053857 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:51:25.053868 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:51:25.053879 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:51:25.053890 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:51:25.053901 | orchestrator | 2026-02-13 02:51:25.053920 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2026-02-13 02:51:36.250829 | orchestrator | Friday 13 February 2026 02:51:25 +0000 (0:00:05.802) 0:05:07.491 ******* 2026-02-13 02:51:36.251741 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:51:36.251784 | orchestrator | 2026-02-13 02:51:36.251797 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2026-02-13 02:51:36.251809 | orchestrator | Friday 13 February 2026 02:51:25 +0000 (0:00:00.448) 0:05:07.939 ******* 2026-02-13 02:51:36.251819 | orchestrator | changed: [testbed-manager] 2026-02-13 02:51:36.251831 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:51:36.251842 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:51:36.251853 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:51:36.251863 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:51:36.251874 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:51:36.251885 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:51:36.251895 | orchestrator | 2026-02-13 02:51:36.251906 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2026-02-13 02:51:36.251917 | orchestrator | Friday 13 February 2026 02:51:26 +0000 (0:00:00.743) 0:05:08.683 ******* 2026-02-13 02:51:36.251928 | orchestrator | ok: [testbed-manager] 2026-02-13 02:51:36.251940 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:51:36.251950 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:51:36.251961 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:51:36.251971 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:51:36.251982 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:51:36.251992 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:51:36.252003 | orchestrator | 2026-02-13 02:51:36.252013 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2026-02-13 02:51:36.252024 | orchestrator | Friday 13 February 2026 02:51:27 +0000 (0:00:01.690) 0:05:10.373 ******* 2026-02-13 02:51:36.252035 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:51:36.252046 | 
orchestrator | changed: [testbed-node-5] 2026-02-13 02:51:36.252056 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:51:36.252067 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:51:36.252078 | orchestrator | changed: [testbed-manager] 2026-02-13 02:51:36.252088 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:51:36.252100 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:51:36.252111 | orchestrator | 2026-02-13 02:51:36.252121 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2026-02-13 02:51:36.252132 | orchestrator | Friday 13 February 2026 02:51:28 +0000 (0:00:00.803) 0:05:11.177 ******* 2026-02-13 02:51:36.252143 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:51:36.252176 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:51:36.252213 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:51:36.252224 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:51:36.252235 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:51:36.252245 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:51:36.252256 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:51:36.252266 | orchestrator | 2026-02-13 02:51:36.252277 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2026-02-13 02:51:36.252288 | orchestrator | Friday 13 February 2026 02:51:28 +0000 (0:00:00.284) 0:05:11.461 ******* 2026-02-13 02:51:36.252298 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:51:36.252309 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:51:36.252319 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:51:36.252330 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:51:36.252340 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:51:36.252350 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:51:36.252361 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:51:36.252371 | 
orchestrator | 2026-02-13 02:51:36.252382 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2026-02-13 02:51:36.252393 | orchestrator | Friday 13 February 2026 02:51:29 +0000 (0:00:00.400) 0:05:11.862 ******* 2026-02-13 02:51:36.252403 | orchestrator | ok: [testbed-manager] 2026-02-13 02:51:36.252414 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:51:36.252424 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:51:36.252434 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:51:36.252445 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:51:36.252455 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:51:36.252466 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:51:36.252476 | orchestrator | 2026-02-13 02:51:36.252487 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2026-02-13 02:51:36.252512 | orchestrator | Friday 13 February 2026 02:51:29 +0000 (0:00:00.340) 0:05:12.202 ******* 2026-02-13 02:51:36.252523 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:51:36.252534 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:51:36.252544 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:51:36.252555 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:51:36.252565 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:51:36.252575 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:51:36.252586 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:51:36.252596 | orchestrator | 2026-02-13 02:51:36.252607 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2026-02-13 02:51:36.252618 | orchestrator | Friday 13 February 2026 02:51:30 +0000 (0:00:00.276) 0:05:12.479 ******* 2026-02-13 02:51:36.252629 | orchestrator | ok: [testbed-manager] 2026-02-13 02:51:36.252640 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:51:36.252650 | orchestrator | ok: [testbed-node-4] 2026-02-13 
02:51:36.252661 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:51:36.252671 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:51:36.252681 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:51:36.252692 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:51:36.252702 | orchestrator | 2026-02-13 02:51:36.252713 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2026-02-13 02:51:36.252723 | orchestrator | Friday 13 February 2026 02:51:30 +0000 (0:00:00.323) 0:05:12.803 ******* 2026-02-13 02:51:36.252734 | orchestrator | ok: [testbed-manager] =>  2026-02-13 02:51:36.252745 | orchestrator |  docker_version: 5:27.5.1 2026-02-13 02:51:36.252755 | orchestrator | ok: [testbed-node-3] =>  2026-02-13 02:51:36.252766 | orchestrator |  docker_version: 5:27.5.1 2026-02-13 02:51:36.252776 | orchestrator | ok: [testbed-node-4] =>  2026-02-13 02:51:36.252787 | orchestrator |  docker_version: 5:27.5.1 2026-02-13 02:51:36.252797 | orchestrator | ok: [testbed-node-5] =>  2026-02-13 02:51:36.252808 | orchestrator |  docker_version: 5:27.5.1 2026-02-13 02:51:36.252840 | orchestrator | ok: [testbed-node-0] =>  2026-02-13 02:51:36.252852 | orchestrator |  docker_version: 5:27.5.1 2026-02-13 02:51:36.252870 | orchestrator | ok: [testbed-node-1] =>  2026-02-13 02:51:36.252881 | orchestrator |  docker_version: 5:27.5.1 2026-02-13 02:51:36.252891 | orchestrator | ok: [testbed-node-2] =>  2026-02-13 02:51:36.252902 | orchestrator |  docker_version: 5:27.5.1 2026-02-13 02:51:36.252957 | orchestrator | 2026-02-13 02:51:36.252971 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2026-02-13 02:51:36.252982 | orchestrator | Friday 13 February 2026 02:51:30 +0000 (0:00:00.276) 0:05:13.079 ******* 2026-02-13 02:51:36.252993 | orchestrator | ok: [testbed-manager] =>  2026-02-13 02:51:36.253004 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-13 02:51:36.253014 | orchestrator | ok: 
[testbed-node-3] =>  2026-02-13 02:51:36.253025 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-13 02:51:36.253035 | orchestrator | ok: [testbed-node-4] =>  2026-02-13 02:51:36.253046 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-13 02:51:36.253056 | orchestrator | ok: [testbed-node-5] =>  2026-02-13 02:51:36.253067 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-13 02:51:36.253078 | orchestrator | ok: [testbed-node-0] =>  2026-02-13 02:51:36.253088 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-13 02:51:36.253099 | orchestrator | ok: [testbed-node-1] =>  2026-02-13 02:51:36.253109 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-13 02:51:36.253120 | orchestrator | ok: [testbed-node-2] =>  2026-02-13 02:51:36.253130 | orchestrator |  docker_cli_version: 5:27.5.1 2026-02-13 02:51:36.253141 | orchestrator | 2026-02-13 02:51:36.253170 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2026-02-13 02:51:36.253182 | orchestrator | Friday 13 February 2026 02:51:30 +0000 (0:00:00.326) 0:05:13.405 ******* 2026-02-13 02:51:36.253193 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:51:36.253236 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:51:36.253248 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:51:36.253258 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:51:36.253269 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:51:36.253280 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:51:36.253290 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:51:36.253301 | orchestrator | 2026-02-13 02:51:36.253312 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2026-02-13 02:51:36.253322 | orchestrator | Friday 13 February 2026 02:51:31 +0000 (0:00:00.280) 0:05:13.686 ******* 2026-02-13 02:51:36.253333 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:51:36.253343 | orchestrator | 
skipping: [testbed-node-3] 2026-02-13 02:51:36.253354 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:51:36.253364 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:51:36.253375 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:51:36.253386 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:51:36.253396 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:51:36.253406 | orchestrator | 2026-02-13 02:51:36.253417 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2026-02-13 02:51:36.253428 | orchestrator | Friday 13 February 2026 02:51:31 +0000 (0:00:00.282) 0:05:13.969 ******* 2026-02-13 02:51:36.253441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:51:36.253454 | orchestrator | 2026-02-13 02:51:36.253465 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2026-02-13 02:51:36.253475 | orchestrator | Friday 13 February 2026 02:51:31 +0000 (0:00:00.462) 0:05:14.432 ******* 2026-02-13 02:51:36.253486 | orchestrator | ok: [testbed-manager] 2026-02-13 02:51:36.253497 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:51:36.253507 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:51:36.253518 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:51:36.253528 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:51:36.253539 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:51:36.253550 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:51:36.253568 | orchestrator | 2026-02-13 02:51:36.253579 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2026-02-13 02:51:36.253590 | orchestrator | Friday 13 February 2026 02:51:32 +0000 (0:00:00.975) 0:05:15.407 ******* 2026-02-13 
02:51:36.253600 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:51:36.253611 | orchestrator | ok: [testbed-manager] 2026-02-13 02:51:36.253621 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:51:36.253632 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:51:36.253648 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:51:36.253659 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:51:36.253669 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:51:36.253680 | orchestrator | 2026-02-13 02:51:36.253691 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2026-02-13 02:51:36.253702 | orchestrator | Friday 13 February 2026 02:51:35 +0000 (0:00:02.921) 0:05:18.328 ******* 2026-02-13 02:51:36.253713 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2026-02-13 02:51:36.253724 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2026-02-13 02:51:36.253734 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2026-02-13 02:51:36.253745 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2026-02-13 02:51:36.253756 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2026-02-13 02:51:36.253766 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2026-02-13 02:51:36.253777 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:51:36.253787 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2026-02-13 02:51:36.253798 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2026-02-13 02:51:36.253808 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2026-02-13 02:51:36.253819 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:51:36.253829 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2026-02-13 02:51:36.253840 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2026-02-13 02:51:36.253850 | orchestrator | skipping: [testbed-node-5] => 
(item=docker-engine)  2026-02-13 02:51:36.253861 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:51:36.253872 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2026-02-13 02:51:36.253889 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2026-02-13 02:52:35.031232 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2026-02-13 02:52:35.031333 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:52:35.031341 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2026-02-13 02:52:35.031346 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2026-02-13 02:52:35.031350 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2026-02-13 02:52:35.031355 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:52:35.031358 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:52:35.031363 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2026-02-13 02:52:35.031366 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2026-02-13 02:52:35.031370 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2026-02-13 02:52:35.031374 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:52:35.031378 | orchestrator | 2026-02-13 02:52:35.031383 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2026-02-13 02:52:35.031388 | orchestrator | Friday 13 February 2026 02:51:36 +0000 (0:00:00.595) 0:05:18.924 ******* 2026-02-13 02:52:35.031392 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:35.031396 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031400 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031403 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:35.031407 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031411 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031415 | orchestrator | changed: [testbed-node-2] 
2026-02-13 02:52:35.031419 | orchestrator | 2026-02-13 02:52:35.031423 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2026-02-13 02:52:35.031441 | orchestrator | Friday 13 February 2026 02:51:42 +0000 (0:00:06.434) 0:05:25.358 ******* 2026-02-13 02:52:35.031445 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:35.031449 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031453 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031457 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:35.031460 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031464 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031468 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:35.031471 | orchestrator | 2026-02-13 02:52:35.031475 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2026-02-13 02:52:35.031479 | orchestrator | Friday 13 February 2026 02:51:43 +0000 (0:00:01.052) 0:05:26.410 ******* 2026-02-13 02:52:35.031482 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:35.031486 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031490 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:35.031493 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031497 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031501 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031504 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:35.031508 | orchestrator | 2026-02-13 02:52:35.031512 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2026-02-13 02:52:35.031516 | orchestrator | Friday 13 February 2026 02:51:51 +0000 (0:00:07.691) 0:05:34.102 ******* 2026-02-13 02:52:35.031519 | orchestrator | changed: [testbed-manager] 2026-02-13 02:52:35.031523 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031527 | 
orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:35.031530 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031534 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031538 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031541 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:35.031545 | orchestrator | 2026-02-13 02:52:35.031549 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2026-02-13 02:52:35.031552 | orchestrator | Friday 13 February 2026 02:51:54 +0000 (0:00:03.350) 0:05:37.452 ******* 2026-02-13 02:52:35.031556 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:35.031560 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031564 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031567 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:35.031571 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031575 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031578 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:35.031582 | orchestrator | 2026-02-13 02:52:35.031586 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2026-02-13 02:52:35.031589 | orchestrator | Friday 13 February 2026 02:51:56 +0000 (0:00:01.315) 0:05:38.768 ******* 2026-02-13 02:52:35.031593 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:35.031597 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031601 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031605 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:35.031608 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031612 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031616 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:35.031620 | orchestrator | 2026-02-13 02:52:35.031624 | orchestrator | TASK [osism.services.docker : Unlock 
containerd package] *********************** 2026-02-13 02:52:35.031628 | orchestrator | Friday 13 February 2026 02:51:57 +0000 (0:00:01.592) 0:05:40.360 ******* 2026-02-13 02:52:35.031631 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:52:35.031635 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:52:35.031639 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:52:35.031642 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:52:35.031646 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:52:35.031650 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:52:35.031657 | orchestrator | changed: [testbed-manager] 2026-02-13 02:52:35.031661 | orchestrator | 2026-02-13 02:52:35.031665 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2026-02-13 02:52:35.031668 | orchestrator | Friday 13 February 2026 02:51:58 +0000 (0:00:00.620) 0:05:40.981 ******* 2026-02-13 02:52:35.031672 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:35.031676 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031680 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:35.031683 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031687 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:35.031691 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031694 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031698 | orchestrator | 2026-02-13 02:52:35.031702 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2026-02-13 02:52:35.031715 | orchestrator | Friday 13 February 2026 02:52:07 +0000 (0:00:09.284) 0:05:50.265 ******* 2026-02-13 02:52:35.031719 | orchestrator | changed: [testbed-manager] 2026-02-13 02:52:35.031723 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031727 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031731 | orchestrator | changed: [testbed-node-5] 
2026-02-13 02:52:35.031734 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031738 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031742 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:35.031745 | orchestrator | 2026-02-13 02:52:35.031749 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2026-02-13 02:52:35.031753 | orchestrator | Friday 13 February 2026 02:52:08 +0000 (0:00:00.908) 0:05:51.174 ******* 2026-02-13 02:52:35.031757 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:35.031760 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031764 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031768 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:35.031772 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031775 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031779 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:35.031783 | orchestrator | 2026-02-13 02:52:35.031787 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2026-02-13 02:52:35.031791 | orchestrator | Friday 13 February 2026 02:52:17 +0000 (0:00:08.647) 0:05:59.822 ******* 2026-02-13 02:52:35.031794 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:35.031798 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031802 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031806 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031809 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:35.031813 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:35.031817 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031820 | orchestrator | 2026-02-13 02:52:35.031824 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2026-02-13 02:52:35.031828 | orchestrator | Friday 13 February 2026 
02:52:28 +0000 (0:00:11.050) 0:06:10.872 ******* 2026-02-13 02:52:35.031832 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2026-02-13 02:52:35.031836 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2026-02-13 02:52:35.031839 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2026-02-13 02:52:35.031843 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2026-02-13 02:52:35.031847 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2026-02-13 02:52:35.031851 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2026-02-13 02:52:35.031854 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2026-02-13 02:52:35.031858 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2026-02-13 02:52:35.031862 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2026-02-13 02:52:35.031865 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2026-02-13 02:52:35.031901 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2026-02-13 02:52:35.031910 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2026-02-13 02:52:35.031914 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2026-02-13 02:52:35.031918 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2026-02-13 02:52:35.031921 | orchestrator | 2026-02-13 02:52:35.031925 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2026-02-13 02:52:35.031929 | orchestrator | Friday 13 February 2026 02:52:29 +0000 (0:00:01.319) 0:06:12.192 ******* 2026-02-13 02:52:35.031933 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:52:35.031936 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:52:35.031940 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:52:35.031944 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:52:35.031948 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:52:35.031951 | orchestrator | skipping: 
[testbed-node-1] 2026-02-13 02:52:35.031955 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:52:35.031959 | orchestrator | 2026-02-13 02:52:35.031963 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2026-02-13 02:52:35.031966 | orchestrator | Friday 13 February 2026 02:52:30 +0000 (0:00:00.547) 0:06:12.739 ******* 2026-02-13 02:52:35.031970 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:35.031974 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:35.031978 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:35.031981 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:35.031985 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:35.031989 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:35.031995 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:35.031999 | orchestrator | 2026-02-13 02:52:35.032003 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2026-02-13 02:52:35.032008 | orchestrator | Friday 13 February 2026 02:52:34 +0000 (0:00:03.812) 0:06:16.552 ******* 2026-02-13 02:52:35.032012 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:52:35.032015 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:52:35.032019 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:52:35.032023 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:52:35.032026 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:52:35.032030 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:52:35.032034 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:52:35.032037 | orchestrator | 2026-02-13 02:52:35.032042 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2026-02-13 02:52:35.032046 | orchestrator | Friday 13 February 2026 02:52:34 +0000 (0:00:00.480) 0:06:17.033 ******* 2026-02-13 
02:52:35.032049 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2026-02-13 02:52:35.032054 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2026-02-13 02:52:35.032058 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:52:35.032061 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2026-02-13 02:52:35.032065 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2026-02-13 02:52:35.032069 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:52:35.032073 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2026-02-13 02:52:35.032076 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2026-02-13 02:52:35.032080 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:52:35.032087 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2026-02-13 02:52:55.276013 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2026-02-13 02:52:55.276096 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:52:55.276102 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2026-02-13 02:52:55.276107 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2026-02-13 02:52:55.276111 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:52:55.276115 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2026-02-13 02:52:55.276134 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2026-02-13 02:52:55.276138 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:52:55.276142 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2026-02-13 02:52:55.276146 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2026-02-13 02:52:55.276150 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:52:55.276154 | orchestrator | 2026-02-13 02:52:55.276159 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install 
python bindings from pip)] *** 2026-02-13 02:52:55.276164 | orchestrator | Friday 13 February 2026 02:52:35 +0000 (0:00:00.712) 0:06:17.745 ******* 2026-02-13 02:52:55.276168 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:52:55.276171 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:52:55.276175 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:52:55.276179 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:52:55.276182 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:52:55.276186 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:52:55.276190 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:52:55.276193 | orchestrator | 2026-02-13 02:52:55.276197 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-13 02:52:55.276202 | orchestrator | Friday 13 February 2026 02:52:35 +0000 (0:00:00.530) 0:06:18.275 ******* 2026-02-13 02:52:55.276206 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:52:55.276209 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:52:55.276213 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:52:55.276217 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:52:55.276220 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:52:55.276224 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:52:55.276228 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:52:55.276231 | orchestrator | 2026-02-13 02:52:55.276235 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-13 02:52:55.276239 | orchestrator | Friday 13 February 2026 02:52:36 +0000 (0:00:00.521) 0:06:18.797 ******* 2026-02-13 02:52:55.276243 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:52:55.276246 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:52:55.276250 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:52:55.276254 | orchestrator | skipping: 
[testbed-node-5] 2026-02-13 02:52:55.276257 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:52:55.276261 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:52:55.276265 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:52:55.276268 | orchestrator | 2026-02-13 02:52:55.276272 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-13 02:52:55.276276 | orchestrator | Friday 13 February 2026 02:52:36 +0000 (0:00:00.517) 0:06:19.314 ******* 2026-02-13 02:52:55.276280 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:55.276284 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:52:55.276288 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:52:55.276292 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:52:55.276295 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:52:55.276299 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:52:55.276303 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:52:55.276345 | orchestrator | 2026-02-13 02:52:55.276350 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-13 02:52:55.276354 | orchestrator | Friday 13 February 2026 02:52:38 +0000 (0:00:01.988) 0:06:21.303 ******* 2026-02-13 02:52:55.276359 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:52:55.276365 | orchestrator | 2026-02-13 02:52:55.276369 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-13 02:52:55.276374 | orchestrator | Friday 13 February 2026 02:52:39 +0000 (0:00:00.892) 0:06:22.196 ******* 2026-02-13 02:52:55.276378 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:55.276389 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:55.276393 | orchestrator | changed: 
[testbed-node-4] 2026-02-13 02:52:55.276397 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:55.276401 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:55.276405 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:55.276409 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:55.276413 | orchestrator | 2026-02-13 02:52:55.276417 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-13 02:52:55.276421 | orchestrator | Friday 13 February 2026 02:52:40 +0000 (0:00:00.848) 0:06:23.044 ******* 2026-02-13 02:52:55.276425 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:55.276429 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:55.276433 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:55.276437 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:55.276441 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:55.276444 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:55.276448 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:55.276452 | orchestrator | 2026-02-13 02:52:55.276456 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-13 02:52:55.276460 | orchestrator | Friday 13 February 2026 02:52:41 +0000 (0:00:00.827) 0:06:23.872 ******* 2026-02-13 02:52:55.276464 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:55.276468 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:55.276472 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:55.276476 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:55.276479 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:55.276483 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:55.276487 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:55.276491 | orchestrator | 2026-02-13 02:52:55.276495 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-02-13 02:52:55.276509 | orchestrator | Friday 13 February 2026 02:52:42 +0000 (0:00:01.514) 0:06:25.387 ******* 2026-02-13 02:52:55.276513 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:52:55.276517 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:52:55.276521 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:52:55.276525 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:52:55.276529 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:52:55.276533 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:52:55.276537 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:52:55.276541 | orchestrator | 2026-02-13 02:52:55.276545 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-13 02:52:55.276549 | orchestrator | Friday 13 February 2026 02:52:44 +0000 (0:00:01.333) 0:06:26.720 ******* 2026-02-13 02:52:55.276553 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:55.276557 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:55.276561 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:55.276564 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:55.276568 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:52:55.276572 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:55.276576 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:55.276580 | orchestrator | 2026-02-13 02:52:55.276584 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-13 02:52:55.276589 | orchestrator | Friday 13 February 2026 02:52:45 +0000 (0:00:01.379) 0:06:28.100 ******* 2026-02-13 02:52:55.276593 | orchestrator | changed: [testbed-manager] 2026-02-13 02:52:55.276598 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:52:55.276602 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:52:55.276606 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:52:55.276611 | orchestrator | changed: 
[testbed-node-0] 2026-02-13 02:52:55.276615 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:52:55.276619 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:52:55.276624 | orchestrator | 2026-02-13 02:52:55.276628 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-13 02:52:55.276637 | orchestrator | Friday 13 February 2026 02:52:47 +0000 (0:00:01.482) 0:06:29.582 ******* 2026-02-13 02:52:55.276641 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:52:55.276646 | orchestrator | 2026-02-13 02:52:55.276650 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-13 02:52:55.276654 | orchestrator | Friday 13 February 2026 02:52:48 +0000 (0:00:01.065) 0:06:30.647 ******* 2026-02-13 02:52:55.276658 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:55.276662 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:52:55.276666 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:52:55.276670 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:52:55.276674 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:52:55.276678 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:52:55.276682 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:52:55.276686 | orchestrator | 2026-02-13 02:52:55.276689 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-13 02:52:55.276693 | orchestrator | Friday 13 February 2026 02:52:49 +0000 (0:00:01.380) 0:06:32.028 ******* 2026-02-13 02:52:55.276697 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:55.276701 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:52:55.276705 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:52:55.276709 | orchestrator | ok: [testbed-node-5] 
2026-02-13 02:52:55.276713 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:52:55.276717 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:52:55.276720 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:52:55.276724 | orchestrator | 2026-02-13 02:52:55.276728 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-13 02:52:55.276732 | orchestrator | Friday 13 February 2026 02:52:51 +0000 (0:00:01.982) 0:06:34.010 ******* 2026-02-13 02:52:55.276736 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:55.276740 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:52:55.276744 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:52:55.276748 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:52:55.276752 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:52:55.276755 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:52:55.276759 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:52:55.276763 | orchestrator | 2026-02-13 02:52:55.276767 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-13 02:52:55.276771 | orchestrator | Friday 13 February 2026 02:52:52 +0000 (0:00:01.174) 0:06:35.185 ******* 2026-02-13 02:52:55.276785 | orchestrator | ok: [testbed-manager] 2026-02-13 02:52:55.276789 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:52:55.276793 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:52:55.276797 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:52:55.276801 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:52:55.276805 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:52:55.276808 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:52:55.276812 | orchestrator | 2026-02-13 02:52:55.276816 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-13 02:52:55.276820 | orchestrator | Friday 13 February 2026 02:52:54 +0000 (0:00:01.355) 0:06:36.540 ******* 2026-02-13 02:52:55.276824 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:52:55.276828 | orchestrator | 2026-02-13 02:52:55.276832 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-13 02:52:55.276836 | orchestrator | Friday 13 February 2026 02:52:54 +0000 (0:00:00.884) 0:06:37.424 ******* 2026-02-13 02:52:55.276840 | orchestrator | 2026-02-13 02:52:55.276844 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-13 02:52:55.276848 | orchestrator | Friday 13 February 2026 02:52:55 +0000 (0:00:00.050) 0:06:37.475 ******* 2026-02-13 02:52:55.276855 | orchestrator | 2026-02-13 02:52:55.276859 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-13 02:52:55.276863 | orchestrator | Friday 13 February 2026 02:52:55 +0000 (0:00:00.045) 0:06:37.521 ******* 2026-02-13 02:52:55.276866 | orchestrator | 2026-02-13 02:52:55.276870 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-13 02:52:55.276877 | orchestrator | Friday 13 February 2026 02:52:55 +0000 (0:00:00.037) 0:06:37.559 ******* 2026-02-13 02:53:21.357355 | orchestrator | 2026-02-13 02:53:21.357493 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-13 02:53:21.357510 | orchestrator | Friday 13 February 2026 02:52:55 +0000 (0:00:00.037) 0:06:37.596 ******* 2026-02-13 02:53:21.357522 | orchestrator | 2026-02-13 02:53:21.357533 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-13 02:53:21.357544 | orchestrator | Friday 13 February 2026 02:52:55 +0000 (0:00:00.045) 0:06:37.641 ******* 2026-02-13 02:53:21.357555 | orchestrator | 
2026-02-13 02:53:21.357566 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-13 02:53:21.357577 | orchestrator | Friday 13 February 2026 02:52:55 +0000 (0:00:00.038) 0:06:37.679 ******* 2026-02-13 02:53:21.357587 | orchestrator | 2026-02-13 02:53:21.357598 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-13 02:53:21.357609 | orchestrator | Friday 13 February 2026 02:52:55 +0000 (0:00:00.037) 0:06:37.717 ******* 2026-02-13 02:53:21.357620 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:21.357631 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:21.357642 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:21.357653 | orchestrator | 2026-02-13 02:53:21.357664 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-13 02:53:21.357674 | orchestrator | Friday 13 February 2026 02:52:56 +0000 (0:00:01.174) 0:06:38.891 ******* 2026-02-13 02:53:21.357685 | orchestrator | changed: [testbed-manager] 2026-02-13 02:53:21.357697 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:53:21.357708 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:53:21.357719 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:53:21.357729 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:53:21.357740 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:53:21.357751 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:53:21.357761 | orchestrator | 2026-02-13 02:53:21.357772 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-13 02:53:21.357783 | orchestrator | Friday 13 February 2026 02:52:57 +0000 (0:00:01.507) 0:06:40.399 ******* 2026-02-13 02:53:21.357794 | orchestrator | changed: [testbed-manager] 2026-02-13 02:53:21.357805 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:53:21.357815 | orchestrator | changed: [testbed-node-4] 
2026-02-13 02:53:21.357826 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:53:21.357836 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:53:21.357847 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:53:21.357857 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:53:21.357868 | orchestrator | 2026-02-13 02:53:21.357879 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-13 02:53:21.357889 | orchestrator | Friday 13 February 2026 02:52:59 +0000 (0:00:01.257) 0:06:41.656 ******* 2026-02-13 02:53:21.357900 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:53:21.357911 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:53:21.357921 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:53:21.357932 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:53:21.357943 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:53:21.357953 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:53:21.357964 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:53:21.357974 | orchestrator | 2026-02-13 02:53:21.357985 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-13 02:53:21.357996 | orchestrator | Friday 13 February 2026 02:53:01 +0000 (0:00:02.554) 0:06:44.210 ******* 2026-02-13 02:53:21.358007 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:53:21.358097 | orchestrator | 2026-02-13 02:53:21.358113 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-13 02:53:21.358124 | orchestrator | Friday 13 February 2026 02:53:01 +0000 (0:00:00.100) 0:06:44.311 ******* 2026-02-13 02:53:21.358135 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:21.358145 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:53:21.358156 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:53:21.358167 | orchestrator | changed: [testbed-node-0] 2026-02-13 
02:53:21.358177 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:53:21.358188 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:53:21.358198 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:53:21.358209 | orchestrator | 2026-02-13 02:53:21.358220 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-13 02:53:21.358231 | orchestrator | Friday 13 February 2026 02:53:02 +0000 (0:00:01.131) 0:06:45.442 ******* 2026-02-13 02:53:21.358242 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:53:21.358267 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:53:21.358278 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:53:21.358289 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:53:21.358299 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:53:21.358310 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:53:21.358320 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:53:21.358331 | orchestrator | 2026-02-13 02:53:21.358342 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-13 02:53:21.358352 | orchestrator | Friday 13 February 2026 02:53:03 +0000 (0:00:00.515) 0:06:45.958 ******* 2026-02-13 02:53:21.358396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:53:21.358416 | orchestrator | 2026-02-13 02:53:21.358427 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-13 02:53:21.358438 | orchestrator | Friday 13 February 2026 02:53:04 +0000 (0:00:01.070) 0:06:47.029 ******* 2026-02-13 02:53:21.358448 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:21.358459 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:21.358470 | orchestrator 
| ok: [testbed-node-4] 2026-02-13 02:53:21.358480 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:21.358490 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:21.358501 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:21.358511 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:21.358523 | orchestrator | 2026-02-13 02:53:21.358533 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-13 02:53:21.358544 | orchestrator | Friday 13 February 2026 02:53:05 +0000 (0:00:00.831) 0:06:47.860 ******* 2026-02-13 02:53:21.358555 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-13 02:53:21.358587 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-13 02:53:21.358599 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-13 02:53:21.358609 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-13 02:53:21.358620 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-13 02:53:21.358630 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-13 02:53:21.358641 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-13 02:53:21.358652 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-13 02:53:21.358663 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-13 02:53:21.358673 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-13 02:53:21.358684 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-13 02:53:21.358694 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-13 02:53:21.358705 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-13 02:53:21.358725 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-13 02:53:21.358736 | orchestrator | 2026-02-13 02:53:21.358747 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-13 02:53:21.358758 | orchestrator | Friday 13 February 2026 02:53:07 +0000 (0:00:02.417) 0:06:50.278 ******* 2026-02-13 02:53:21.358768 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:53:21.358779 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:53:21.358789 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:53:21.358800 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:53:21.358810 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:53:21.358821 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:53:21.358831 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:53:21.358842 | orchestrator | 2026-02-13 02:53:21.358852 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-13 02:53:21.358863 | orchestrator | Friday 13 February 2026 02:53:08 +0000 (0:00:00.766) 0:06:51.045 ******* 2026-02-13 02:53:21.358877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:53:21.358890 | orchestrator | 2026-02-13 02:53:21.358900 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-13 02:53:21.358911 | orchestrator | Friday 13 February 2026 02:53:09 +0000 (0:00:00.801) 0:06:51.846 ******* 2026-02-13 02:53:21.358922 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:21.358933 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:21.358943 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:21.358954 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:21.358964 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:21.358975 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:21.358985 | orchestrator | ok: 
[testbed-node-2] 2026-02-13 02:53:21.358996 | orchestrator | 2026-02-13 02:53:21.359007 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-13 02:53:21.359017 | orchestrator | Friday 13 February 2026 02:53:10 +0000 (0:00:00.837) 0:06:52.684 ******* 2026-02-13 02:53:21.359028 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:21.359038 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:21.359049 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:21.359059 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:21.359070 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:21.359080 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:21.359090 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:21.359101 | orchestrator | 2026-02-13 02:53:21.359112 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-13 02:53:21.359123 | orchestrator | Friday 13 February 2026 02:53:11 +0000 (0:00:01.007) 0:06:53.692 ******* 2026-02-13 02:53:21.359133 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:53:21.359144 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:53:21.359154 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:53:21.359165 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:53:21.359176 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:53:21.359186 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:53:21.359197 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:53:21.359208 | orchestrator | 2026-02-13 02:53:21.359218 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-13 02:53:21.359229 | orchestrator | Friday 13 February 2026 02:53:11 +0000 (0:00:00.525) 0:06:54.218 ******* 2026-02-13 02:53:21.359240 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:21.359251 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:21.359261 | 
orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:21.359272 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:21.359283 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:21.359293 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:21.359303 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:21.359321 | orchestrator | 2026-02-13 02:53:21.359332 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-13 02:53:21.359342 | orchestrator | Friday 13 February 2026 02:53:13 +0000 (0:00:01.470) 0:06:55.688 ******* 2026-02-13 02:53:21.359353 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:53:21.359405 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:53:21.359416 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:53:21.359426 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:53:21.359437 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:53:21.359447 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:53:21.359458 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:53:21.359468 | orchestrator | 2026-02-13 02:53:21.359479 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-13 02:53:21.359490 | orchestrator | Friday 13 February 2026 02:53:13 +0000 (0:00:00.522) 0:06:56.210 ******* 2026-02-13 02:53:21.359500 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:21.359511 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:53:21.359521 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:53:21.359532 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:53:21.359543 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:53:21.359553 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:53:21.359570 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:53:53.673341 | orchestrator | 2026-02-13 02:53:53.673493 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-02-13 02:53:53.673518 | orchestrator | Friday 13 February 2026 02:53:21 +0000 (0:00:07.591) 0:07:03.801 ******* 2026-02-13 02:53:53.673535 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.673553 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:53:53.673568 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:53:53.673583 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:53:53.673597 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:53:53.673609 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:53:53.673625 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:53:53.673640 | orchestrator | 2026-02-13 02:53:53.673654 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-13 02:53:53.673669 | orchestrator | Friday 13 February 2026 02:53:22 +0000 (0:00:01.541) 0:07:05.343 ******* 2026-02-13 02:53:53.673684 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.673698 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:53:53.673712 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:53:53.673727 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:53:53.673742 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:53:53.673756 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:53:53.673771 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:53:53.673785 | orchestrator | 2026-02-13 02:53:53.673800 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-13 02:53:53.673814 | orchestrator | Friday 13 February 2026 02:53:24 +0000 (0:00:01.765) 0:07:07.109 ******* 2026-02-13 02:53:53.673828 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.673844 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:53:53.673859 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:53:53.673874 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:53:53.673889 | 
orchestrator | changed: [testbed-node-0] 2026-02-13 02:53:53.673904 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:53:53.673919 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:53:53.673934 | orchestrator | 2026-02-13 02:53:53.673950 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-13 02:53:53.673965 | orchestrator | Friday 13 February 2026 02:53:26 +0000 (0:00:01.707) 0:07:08.816 ******* 2026-02-13 02:53:53.673980 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.673995 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:53.674011 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:53.674092 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:53.674110 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:53.674154 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:53.674165 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:53.674175 | orchestrator | 2026-02-13 02:53:53.674185 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-13 02:53:53.674194 | orchestrator | Friday 13 February 2026 02:53:27 +0000 (0:00:00.878) 0:07:09.695 ******* 2026-02-13 02:53:53.674215 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:53:53.674223 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:53:53.674241 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:53:53.674248 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:53:53.674256 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:53:53.674264 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:53:53.674272 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:53:53.674279 | orchestrator | 2026-02-13 02:53:53.674287 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-13 02:53:53.674295 | orchestrator | Friday 13 February 2026 02:53:28 +0000 (0:00:01.101) 0:07:10.796 ******* 
2026-02-13 02:53:53.674302 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:53:53.674310 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:53:53.674318 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:53:53.674326 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:53:53.674333 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:53:53.674341 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:53:53.674349 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:53:53.674356 | orchestrator | 2026-02-13 02:53:53.674364 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-13 02:53:53.674386 | orchestrator | Friday 13 February 2026 02:53:28 +0000 (0:00:00.529) 0:07:11.325 ******* 2026-02-13 02:53:53.674395 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.674402 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:53.674410 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:53.674435 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:53.674443 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:53.674455 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:53.674463 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:53.674470 | orchestrator | 2026-02-13 02:53:53.674478 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-02-13 02:53:53.674486 | orchestrator | Friday 13 February 2026 02:53:29 +0000 (0:00:00.559) 0:07:11.885 ******* 2026-02-13 02:53:53.674494 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.674501 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:53.674509 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:53.674517 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:53.674525 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:53.674533 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:53.674540 | orchestrator | ok: [testbed-node-2] 2026-02-13 
02:53:53.674548 | orchestrator | 2026-02-13 02:53:53.674556 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-13 02:53:53.674564 | orchestrator | Friday 13 February 2026 02:53:30 +0000 (0:00:00.758) 0:07:12.644 ******* 2026-02-13 02:53:53.674572 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.674579 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:53.674587 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:53.674595 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:53.674602 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:53.674610 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:53.674618 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:53.674625 | orchestrator | 2026-02-13 02:53:53.674633 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-13 02:53:53.674641 | orchestrator | Friday 13 February 2026 02:53:30 +0000 (0:00:00.535) 0:07:13.179 ******* 2026-02-13 02:53:53.674649 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.674657 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:53.674664 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:53.674672 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:53.674686 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:53.674694 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:53.674701 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:53.674709 | orchestrator | 2026-02-13 02:53:53.674735 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-02-13 02:53:53.674743 | orchestrator | Friday 13 February 2026 02:53:36 +0000 (0:00:05.703) 0:07:18.883 ******* 2026-02-13 02:53:53.674751 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:53:53.674759 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:53:53.674767 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:53:53.674775 
| orchestrator | skipping: [testbed-node-5] 2026-02-13 02:53:53.674783 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:53:53.674790 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:53:53.674798 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:53:53.674806 | orchestrator | 2026-02-13 02:53:53.674814 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-02-13 02:53:53.674822 | orchestrator | Friday 13 February 2026 02:53:36 +0000 (0:00:00.539) 0:07:19.422 ******* 2026-02-13 02:53:53.674832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:53:53.674843 | orchestrator | 2026-02-13 02:53:53.674851 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-02-13 02:53:53.674859 | orchestrator | Friday 13 February 2026 02:53:37 +0000 (0:00:01.040) 0:07:20.463 ******* 2026-02-13 02:53:53.674867 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.674875 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:53.674883 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:53.674890 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:53.674898 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:53.674906 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:53.674914 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:53.674921 | orchestrator | 2026-02-13 02:53:53.674929 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-02-13 02:53:53.674937 | orchestrator | Friday 13 February 2026 02:53:39 +0000 (0:00:01.851) 0:07:22.315 ******* 2026-02-13 02:53:53.674945 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.674953 | orchestrator | ok: [testbed-node-3] 2026-02-13 
02:53:53.674960 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:53.674969 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:53.674976 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:53.674984 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:53.674992 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:53.674999 | orchestrator | 2026-02-13 02:53:53.675007 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-02-13 02:53:53.675015 | orchestrator | Friday 13 February 2026 02:53:40 +0000 (0:00:01.114) 0:07:23.429 ******* 2026-02-13 02:53:53.675023 | orchestrator | ok: [testbed-manager] 2026-02-13 02:53:53.675031 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:53:53.675038 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:53:53.675046 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:53:53.675054 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:53:53.675061 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:53:53.675069 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:53:53.675077 | orchestrator | 2026-02-13 02:53:53.675085 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-02-13 02:53:53.675093 | orchestrator | Friday 13 February 2026 02:53:41 +0000 (0:00:00.845) 0:07:24.275 ******* 2026-02-13 02:53:53.675101 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-13 02:53:53.675110 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-13 02:53:53.675124 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-13 02:53:53.675132 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-13 02:53:53.675144 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-13 02:53:53.675152 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-13 02:53:53.675160 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-13 02:53:53.675167 | orchestrator | 2026-02-13 02:53:53.675175 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-02-13 02:53:53.675183 | orchestrator | Friday 13 February 2026 02:53:43 +0000 (0:00:01.801) 0:07:26.076 ******* 2026-02-13 02:53:53.675192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:53:53.675200 | orchestrator | 2026-02-13 02:53:53.675208 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-02-13 02:53:53.675215 | orchestrator | Friday 13 February 2026 02:53:44 +0000 (0:00:00.760) 0:07:26.837 ******* 2026-02-13 02:53:53.675223 | orchestrator | changed: [testbed-manager] 2026-02-13 02:53:53.675231 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:53:53.675239 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:53:53.675247 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:53:53.675255 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:53:53.675263 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:53:53.675270 | orchestrator | changed: 
[testbed-node-4] 2026-02-13 02:53:53.675278 | orchestrator | 2026-02-13 02:53:53.675291 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-02-13 02:54:24.353978 | orchestrator | Friday 13 February 2026 02:53:53 +0000 (0:00:09.280) 0:07:36.117 ******* 2026-02-13 02:54:24.354127 | orchestrator | ok: [testbed-manager] 2026-02-13 02:54:24.354144 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:54:24.354156 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:54:24.354168 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:54:24.354179 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:54:24.354190 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:54:24.354201 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:54:24.354212 | orchestrator | 2026-02-13 02:54:24.354224 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-02-13 02:54:24.354236 | orchestrator | Friday 13 February 2026 02:53:55 +0000 (0:00:01.960) 0:07:38.078 ******* 2026-02-13 02:54:24.354247 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:54:24.354258 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:54:24.354269 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:54:24.354280 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:54:24.354291 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:54:24.354302 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:54:24.354313 | orchestrator | 2026-02-13 02:54:24.354324 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-02-13 02:54:24.354335 | orchestrator | Friday 13 February 2026 02:53:56 +0000 (0:00:01.282) 0:07:39.360 ******* 2026-02-13 02:54:24.354346 | orchestrator | changed: [testbed-manager] 2026-02-13 02:54:24.354358 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:54:24.354369 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:54:24.354381 | orchestrator | changed: 
[testbed-node-5] 2026-02-13 02:54:24.354392 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:54:24.354403 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:54:24.354413 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:54:24.354445 | orchestrator | 2026-02-13 02:54:24.354457 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-02-13 02:54:24.354503 | orchestrator | 2026-02-13 02:54:24.354515 | orchestrator | TASK [Include hardening role] ************************************************** 2026-02-13 02:54:24.354528 | orchestrator | Friday 13 February 2026 02:53:58 +0000 (0:00:01.181) 0:07:40.542 ******* 2026-02-13 02:54:24.354540 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:54:24.354553 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:54:24.354565 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:54:24.354581 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:54:24.354603 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:54:24.354616 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:54:24.354628 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:54:24.354640 | orchestrator | 2026-02-13 02:54:24.354653 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-02-13 02:54:24.354665 | orchestrator | 2026-02-13 02:54:24.354677 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-02-13 02:54:24.354690 | orchestrator | Friday 13 February 2026 02:53:58 +0000 (0:00:00.750) 0:07:41.293 ******* 2026-02-13 02:54:24.354702 | orchestrator | changed: [testbed-manager] 2026-02-13 02:54:24.354714 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:54:24.354727 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:54:24.354739 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:54:24.354751 | orchestrator | changed: [testbed-node-0] 2026-02-13 
02:54:24.354763 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:54:24.354776 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:54:24.354788 | orchestrator | 2026-02-13 02:54:24.354800 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-02-13 02:54:24.354812 | orchestrator | Friday 13 February 2026 02:54:00 +0000 (0:00:01.440) 0:07:42.733 ******* 2026-02-13 02:54:24.354825 | orchestrator | ok: [testbed-manager] 2026-02-13 02:54:24.354837 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:54:24.354849 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:54:24.354862 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:54:24.354874 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:54:24.354885 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:54:24.354895 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:54:24.354906 | orchestrator | 2026-02-13 02:54:24.354917 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-02-13 02:54:24.354928 | orchestrator | Friday 13 February 2026 02:54:01 +0000 (0:00:01.472) 0:07:44.205 ******* 2026-02-13 02:54:24.354938 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:54:24.354949 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:54:24.354960 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:54:24.354970 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:54:24.354981 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:54:24.355006 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:54:24.355018 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:54:24.355028 | orchestrator | 2026-02-13 02:54:24.355039 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-02-13 02:54:24.355050 | orchestrator | Friday 13 February 2026 02:54:02 +0000 (0:00:00.472) 0:07:44.678 ******* 2026-02-13 02:54:24.355062 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:54:24.355075 | orchestrator | 2026-02-13 02:54:24.355086 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-02-13 02:54:24.355097 | orchestrator | Friday 13 February 2026 02:54:03 +0000 (0:00:00.960) 0:07:45.638 ******* 2026-02-13 02:54:24.355110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 02:54:24.355131 | orchestrator | 2026-02-13 02:54:24.355142 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-02-13 02:54:24.355153 | orchestrator | Friday 13 February 2026 02:54:03 +0000 (0:00:00.757) 0:07:46.395 ******* 2026-02-13 02:54:24.355164 | orchestrator | changed: [testbed-manager] 2026-02-13 02:54:24.355175 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:54:24.355185 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:54:24.355196 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:54:24.355207 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:54:24.355217 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:54:24.355228 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:54:24.355238 | orchestrator | 2026-02-13 02:54:24.355268 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-02-13 02:54:24.355280 | orchestrator | Friday 13 February 2026 02:54:13 +0000 (0:00:09.080) 0:07:55.476 ******* 2026-02-13 02:54:24.355290 | orchestrator | changed: [testbed-manager] 2026-02-13 02:54:24.355301 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:54:24.355312 | orchestrator | changed: [testbed-node-4] 2026-02-13 
02:54:24.355322 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:54:24.355333 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:54:24.355344 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:54:24.355354 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:54:24.355365 | orchestrator | 2026-02-13 02:54:24.355375 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-02-13 02:54:24.355386 | orchestrator | Friday 13 February 2026 02:54:13 +0000 (0:00:00.827) 0:07:56.303 ******* 2026-02-13 02:54:24.355397 | orchestrator | changed: [testbed-manager] 2026-02-13 02:54:24.355407 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:54:24.355419 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:54:24.355430 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:54:24.355440 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:54:24.355451 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:54:24.355461 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:54:24.355499 | orchestrator | 2026-02-13 02:54:24.355510 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-02-13 02:54:24.355521 | orchestrator | Friday 13 February 2026 02:54:15 +0000 (0:00:01.406) 0:07:57.710 ******* 2026-02-13 02:54:24.355532 | orchestrator | changed: [testbed-manager] 2026-02-13 02:54:24.355543 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:54:24.355553 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:54:24.355564 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:54:24.355575 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:54:24.355585 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:54:24.355596 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:54:24.355607 | orchestrator | 2026-02-13 02:54:24.355618 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-02-13 02:54:24.355628 | orchestrator | Friday 13 February 2026 02:54:17 +0000 (0:00:01.892) 0:07:59.603 ******* 2026-02-13 02:54:24.355639 | orchestrator | changed: [testbed-manager] 2026-02-13 02:54:24.355650 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:54:24.355661 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:54:24.355671 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:54:24.355682 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:54:24.355693 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:54:24.355703 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:54:24.355714 | orchestrator | 2026-02-13 02:54:24.355725 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-02-13 02:54:24.355736 | orchestrator | Friday 13 February 2026 02:54:18 +0000 (0:00:01.280) 0:08:00.884 ******* 2026-02-13 02:54:24.355746 | orchestrator | changed: [testbed-manager] 2026-02-13 02:54:24.355757 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:54:24.355768 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:54:24.355778 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:54:24.355797 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:54:24.355808 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:54:24.355818 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:54:24.355829 | orchestrator | 2026-02-13 02:54:24.355839 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-02-13 02:54:24.355850 | orchestrator | 2026-02-13 02:54:24.355861 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-02-13 02:54:24.355872 | orchestrator | Friday 13 February 2026 02:54:19 +0000 (0:00:01.118) 0:08:02.002 ******* 2026-02-13 02:54:24.355883 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-13 02:54:24.355894 | orchestrator | 2026-02-13 02:54:24.355904 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-13 02:54:24.355915 | orchestrator | Friday 13 February 2026 02:54:20 +0000 (0:00:00.781) 0:08:02.784 ******* 2026-02-13 02:54:24.355926 | orchestrator | ok: [testbed-manager] 2026-02-13 02:54:24.355937 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:54:24.355947 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:54:24.355958 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:54:24.355968 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:54:24.355985 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:54:24.355996 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:54:24.356007 | orchestrator | 2026-02-13 02:54:24.356018 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-13 02:54:24.356029 | orchestrator | Friday 13 February 2026 02:54:21 +0000 (0:00:01.109) 0:08:03.894 ******* 2026-02-13 02:54:24.356039 | orchestrator | changed: [testbed-manager] 2026-02-13 02:54:24.356050 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:54:24.356061 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:54:24.356072 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:54:24.356082 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:54:24.356093 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:54:24.356104 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:54:24.356114 | orchestrator | 2026-02-13 02:54:24.356125 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-02-13 02:54:24.356136 | orchestrator | Friday 13 February 2026 02:54:22 +0000 (0:00:01.115) 0:08:05.009 ******* 2026-02-13 02:54:24.356147 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-13 02:54:24.356158 | orchestrator | 2026-02-13 02:54:24.356168 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-13 02:54:24.356179 | orchestrator | Friday 13 February 2026 02:54:23 +0000 (0:00:00.949) 0:08:05.959 ******* 2026-02-13 02:54:24.356190 | orchestrator | ok: [testbed-manager] 2026-02-13 02:54:24.356201 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:54:24.356211 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:54:24.356222 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:54:24.356232 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:54:24.356243 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:54:24.356253 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:54:24.356264 | orchestrator | 2026-02-13 02:54:24.356283 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-13 02:54:25.910805 | orchestrator | Friday 13 February 2026 02:54:24 +0000 (0:00:00.839) 0:08:06.798 ******* 2026-02-13 02:54:25.910907 | orchestrator | changed: [testbed-manager] 2026-02-13 02:54:25.910926 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:54:25.910938 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:54:25.910949 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:54:25.910960 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:54:25.910971 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:54:25.910981 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:54:25.910992 | orchestrator | 2026-02-13 02:54:25.911004 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:54:25.911044 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-13 02:54:25.911057 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-02-13 02:54:25.911068 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-13 02:54:25.911079 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-13 02:54:25.911090 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-02-13 02:54:25.911100 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-13 02:54:25.911111 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-13 02:54:25.911121 | orchestrator | 2026-02-13 02:54:25.911132 | orchestrator | 2026-02-13 02:54:25.911143 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 02:54:25.911154 | orchestrator | Friday 13 February 2026 02:54:25 +0000 (0:00:01.083) 0:08:07.882 ******* 2026-02-13 02:54:25.911164 | orchestrator | =============================================================================== 2026-02-13 02:54:25.911175 | orchestrator | osism.commons.packages : Install required packages --------------------- 85.04s 2026-02-13 02:54:25.911185 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 35.11s 2026-02-13 02:54:25.911196 | orchestrator | osism.commons.packages : Download required packages -------------------- 32.22s 2026-02-13 02:54:25.911206 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.30s 2026-02-13 02:54:25.911217 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.76s 2026-02-13 02:54:25.911229 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.18s 2026-02-13 02:54:25.911239 | orchestrator | osism.services.docker : Install docker package ------------------------- 
11.05s 2026-02-13 02:54:25.911258 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.28s 2026-02-13 02:54:25.911277 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.28s 2026-02-13 02:54:25.911296 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.08s 2026-02-13 02:54:25.911317 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.65s 2026-02-13 02:54:25.911339 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.57s 2026-02-13 02:54:25.911360 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.80s 2026-02-13 02:54:25.911388 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.69s 2026-02-13 02:54:25.911402 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.59s 2026-02-13 02:54:25.911414 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.52s 2026-02-13 02:54:25.911426 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.43s 2026-02-13 02:54:25.911438 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.80s 2026-02-13 02:54:25.911450 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.74s 2026-02-13 02:54:25.911462 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.70s 2026-02-13 02:54:26.220078 | orchestrator | + osism apply fail2ban 2026-02-13 02:54:38.866578 | orchestrator | 2026-02-13 02:54:38 | INFO  | Task c3a963cb-df18-4fca-b810-c111922beed6 (fail2ban) was prepared for execution. 
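The TASKS RECAP above ranks tasks by wall-clock duration (the trailing `--- 85.04s` style lines come from Ansible's `profile_tasks` timing output). When triaging slow runs like this one, the same ranking can be recovered from the raw console text. A minimal sketch, assuming nothing about osism tooling — the regex, helper name, and sample lines are illustrative only:

```python
import re

# Matches per-task timing lines in the TASKS RECAP, e.g.:
#   "osism.commons.packages : Install required packages ------------- 85.04s"
# Illustrative pattern; not part of osism or Zuul tooling.
TASK_TIMING = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def slowest_tasks(lines, top=3):
    """Return (task, seconds) pairs sorted by descending duration."""
    found = []
    for line in lines:
        m = TASK_TIMING.match(line.strip())
        if m:
            found.append((m.group("task"), float(m.group("secs"))))
    return sorted(found, key=lambda t: t[1], reverse=True)[:top]

sample = [
    "osism.commons.packages : Install required packages --------------------- 85.04s",
    "osism.commons.cleanup : Cleanup installed packages --------------------- 35.11s",
    "osism.services.lldpd : Install lldpd package ---------------------------- 9.28s",
]
print(slowest_tasks(sample, top=2))
# First entry: ('osism.commons.packages : Install required packages', 85.04)
```

In this run the top entries are package installs and cache updates, which is expected for a first bootstrap of fresh nodes.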
2026-02-13 02:54:38.866711 | orchestrator | 2026-02-13 02:54:38 | INFO  | It takes a moment until task c3a963cb-df18-4fca-b810-c111922beed6 (fail2ban) has been started and output is visible here. 2026-02-13 02:55:00.084358 | orchestrator | 2026-02-13 02:55:00.084500 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-13 02:55:00.084620 | orchestrator | 2026-02-13 02:55:00.084644 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-13 02:55:00.084663 | orchestrator | Friday 13 February 2026 02:54:43 +0000 (0:00:00.209) 0:00:00.209 ******* 2026-02-13 02:55:00.084685 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 02:55:00.084706 | orchestrator | 2026-02-13 02:55:00.084726 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-13 02:55:00.084743 | orchestrator | Friday 13 February 2026 02:54:44 +0000 (0:00:01.011) 0:00:01.221 ******* 2026-02-13 02:55:00.084762 | orchestrator | changed: [testbed-manager] 2026-02-13 02:55:00.084782 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:55:00.084799 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:55:00.084819 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:55:00.084837 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:55:00.084856 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:55:00.084876 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:55:00.084895 | orchestrator | 2026-02-13 02:55:00.084915 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-13 02:55:00.084936 | orchestrator | Friday 13 February 2026 02:54:55 +0000 (0:00:10.998) 0:00:12.220 ******* 
2026-02-13 02:55:00.084954 | orchestrator | changed: [testbed-manager]
2026-02-13 02:55:00.084973 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:55:00.084991 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:55:00.085010 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:55:00.085029 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:55:00.085050 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:55:00.085069 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:55:00.085089 | orchestrator |
2026-02-13 02:55:00.085106 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-13 02:55:00.085119 | orchestrator | Friday 13 February 2026 02:54:56 +0000 (0:00:01.452) 0:00:13.672 *******
2026-02-13 02:55:00.085132 | orchestrator | ok: [testbed-manager]
2026-02-13 02:55:00.085147 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:55:00.085167 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:55:00.085186 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:55:00.085205 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:55:00.085224 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:55:00.085244 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:55:00.085263 | orchestrator |
2026-02-13 02:55:00.085282 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-13 02:55:00.085302 | orchestrator | Friday 13 February 2026 02:54:57 +0000 (0:00:01.443) 0:00:15.116 *******
2026-02-13 02:55:00.085322 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:55:00.085342 | orchestrator | changed: [testbed-manager]
2026-02-13 02:55:00.085361 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:55:00.085380 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:55:00.085398 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:55:00.085416 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:55:00.085433 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:55:00.085450 | orchestrator |
2026-02-13 02:55:00.085468 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 02:55:00.085488 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:55:00.085584 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:55:00.085608 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:55:00.085627 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:55:00.085648 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:55:00.085660 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:55:00.085672 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 02:55:00.085682 | orchestrator |
2026-02-13 02:55:00.085693 | orchestrator |
2026-02-13 02:55:00.085704 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 02:55:00.085715 | orchestrator | Friday 13 February 2026 02:54:59 +0000 (0:00:01.722) 0:00:16.838 *******
2026-02-13 02:55:00.085726 | orchestrator | ===============================================================================
2026-02-13 02:55:00.085737 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.00s
2026-02-13 02:55:00.085747 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.72s
2026-02-13 02:55:00.085758 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.45s
2026-02-13 02:55:00.085769 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.44s
2026-02-13 02:55:00.085779 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.01s
2026-02-13 02:55:00.379361 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-13 02:55:00.379467 | orchestrator | + osism apply network
2026-02-13 02:55:12.479921 | orchestrator | 2026-02-13 02:55:12 | INFO  | Task 2330d75c-4c72-4b0d-b20f-5486739f38e3 (network) was prepared for execution.
2026-02-13 02:55:12.480016 | orchestrator | 2026-02-13 02:55:12 | INFO  | It takes a moment until task 2330d75c-4c72-4b0d-b20f-5486739f38e3 (network) has been started and output is visible here.
2026-02-13 02:55:40.063622 | orchestrator |
2026-02-13 02:55:40.063711 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-13 02:55:40.063719 | orchestrator |
2026-02-13 02:55:40.063725 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-13 02:55:40.063731 | orchestrator | Friday 13 February 2026 02:55:16 +0000 (0:00:00.184) 0:00:00.184 *******
2026-02-13 02:55:40.063736 | orchestrator | ok: [testbed-manager]
2026-02-13 02:55:40.063743 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:55:40.063748 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:55:40.063754 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:55:40.063759 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:55:40.063764 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:55:40.063769 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:55:40.063774 | orchestrator |
2026-02-13 02:55:40.063779 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-13 02:55:40.063784 | orchestrator | Friday 13 February 2026 02:55:16 +0000 (0:00:00.539) 0:00:00.723 *******
2026-02-13 02:55:40.063790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 02:55:40.063797 | orchestrator |
2026-02-13 02:55:40.063802 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-13 02:55:40.063807 | orchestrator | Friday 13 February 2026 02:55:18 +0000 (0:00:01.076) 0:00:01.800 *******
2026-02-13 02:55:40.063830 | orchestrator | ok: [testbed-manager]
2026-02-13 02:55:40.063835 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:55:40.063840 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:55:40.063845 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:55:40.063850 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:55:40.063855 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:55:40.063860 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:55:40.063865 | orchestrator |
2026-02-13 02:55:40.063870 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-13 02:55:40.063875 | orchestrator | Friday 13 February 2026 02:55:20 +0000 (0:00:02.103) 0:00:03.904 *******
2026-02-13 02:55:40.063880 | orchestrator | ok: [testbed-manager]
2026-02-13 02:55:40.063885 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:55:40.063890 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:55:40.063895 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:55:40.063900 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:55:40.063905 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:55:40.063910 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:55:40.063915 | orchestrator |
2026-02-13 02:55:40.063920 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-13 02:55:40.063925 | orchestrator | Friday 13 February 2026 02:55:21 +0000 (0:00:01.790) 0:00:05.695 *******
2026-02-13 02:55:40.063931 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-13 02:55:40.063936 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-13 02:55:40.063941 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-13 02:55:40.063946 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-13 02:55:40.063951 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-13 02:55:40.063969 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-13 02:55:40.063975 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-13 02:55:40.063980 | orchestrator |
2026-02-13 02:55:40.063985 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-13 02:55:40.063990 | orchestrator | Friday 13 February 2026 02:55:22 +0000 (0:00:00.965) 0:00:06.660 *******
2026-02-13 02:55:40.063995 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-13 02:55:40.064001 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-13 02:55:40.064006 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-13 02:55:40.064011 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-13 02:55:40.064016 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-13 02:55:40.064021 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-13 02:55:40.064026 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-13 02:55:40.064031 | orchestrator |
2026-02-13 02:55:40.064036 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-13 02:55:40.064041 | orchestrator | Friday 13 February 2026 02:55:25 +0000 (0:00:03.053) 0:00:09.714 *******
2026-02-13 02:55:40.064046 | orchestrator | changed: [testbed-manager]
2026-02-13 02:55:40.064051 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:55:40.064056 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:55:40.064064 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:55:40.064069 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:55:40.064074 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:55:40.064079 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:55:40.064084 | orchestrator |
2026-02-13 02:55:40.064089 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-13 02:55:40.064094 | orchestrator | Friday 13 February 2026 02:55:27 +0000 (0:00:01.484) 0:00:11.198 *******
2026-02-13 02:55:40.064099 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-13 02:55:40.064104 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-13 02:55:40.064109 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-13 02:55:40.064114 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-13 02:55:40.064119 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-13 02:55:40.064124 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-13 02:55:40.064134 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-13 02:55:40.064139 | orchestrator |
2026-02-13 02:55:40.064144 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-13 02:55:40.064149 | orchestrator | Friday 13 February 2026 02:55:29 +0000 (0:00:01.640) 0:00:12.839 *******
2026-02-13 02:55:40.064154 | orchestrator | ok: [testbed-manager]
2026-02-13 02:55:40.064159 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:55:40.064164 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:55:40.064169 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:55:40.064174 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:55:40.064179 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:55:40.064184 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:55:40.064189 | orchestrator |
2026-02-13 02:55:40.064194 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-13 02:55:40.064209 | orchestrator | Friday 13 February 2026 02:55:30 +0000 (0:00:01.040) 0:00:13.879 *******
2026-02-13 02:55:40.064215 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:55:40.064220 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:55:40.064225 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:55:40.064230 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:55:40.064235 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:55:40.064240 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:55:40.064244 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:55:40.064249 | orchestrator |
2026-02-13 02:55:40.064255 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-13 02:55:40.064260 | orchestrator | Friday 13 February 2026 02:55:30 +0000 (0:00:00.638) 0:00:14.518 *******
2026-02-13 02:55:40.064265 | orchestrator | ok: [testbed-manager]
2026-02-13 02:55:40.064270 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:55:40.064275 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:55:40.064280 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:55:40.064285 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:55:40.064290 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:55:40.064295 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:55:40.064300 | orchestrator |
2026-02-13 02:55:40.064305 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-13 02:55:40.064310 | orchestrator | Friday 13 February 2026 02:55:33 +0000 (0:00:02.258) 0:00:16.777 *******
2026-02-13 02:55:40.064315 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:55:40.064320 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:55:40.064324 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:55:40.064330 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:55:40.064334 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:55:40.064339 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:55:40.064345 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-13 02:55:40.064351 | orchestrator |
2026-02-13 02:55:40.064356 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-13 02:55:40.064362 | orchestrator | Friday 13 February 2026 02:55:33 +0000 (0:00:00.874) 0:00:17.652 *******
2026-02-13 02:55:40.064367 | orchestrator | ok: [testbed-manager]
2026-02-13 02:55:40.064372 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:55:40.064377 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:55:40.064382 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:55:40.064387 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:55:40.064392 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:55:40.064397 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:55:40.064402 | orchestrator |
2026-02-13 02:55:40.064407 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-13 02:55:40.064412 | orchestrator | Friday 13 February 2026 02:55:35 +0000 (0:00:01.659) 0:00:19.311 *******
2026-02-13 02:55:40.064417 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 02:55:40.064429 | orchestrator |
2026-02-13 02:55:40.064434 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-13 02:55:40.064439 | orchestrator | Friday 13 February 2026 02:55:36 +0000 (0:00:01.272) 0:00:20.583 *******
2026-02-13 02:55:40.064444 | orchestrator | ok: [testbed-manager]
2026-02-13 02:55:40.064449 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:55:40.064454 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:55:40.064459 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:55:40.064464 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:55:40.064469 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:55:40.064474 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:55:40.064479 | orchestrator |
2026-02-13 02:55:40.064484 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-13 02:55:40.064489 | orchestrator | Friday 13 February 2026 02:55:37 +0000 (0:00:00.986) 0:00:21.569 *******
2026-02-13 02:55:40.064494 | orchestrator | ok: [testbed-manager]
2026-02-13 02:55:40.064499 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:55:40.064504 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:55:40.064509 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:55:40.064514 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:55:40.064519 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:55:40.064524 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:55:40.064529 | orchestrator |
2026-02-13 02:55:40.064534 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-13 02:55:40.064539 | orchestrator | Friday 13 February 2026 02:55:38 +0000 (0:00:00.921) 0:00:22.490 *******
2026-02-13 02:55:40.064547 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-13 02:55:40.064553 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-13 02:55:40.064558 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-13 02:55:40.064563 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-13 02:55:40.064568 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-13 02:55:40.064573 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-13 02:55:40.064578 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-13 02:55:40.064617 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-13 02:55:40.064622 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-13 02:55:40.064627 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-13 02:55:40.064632 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-13 02:55:40.064637 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-13 02:55:40.064642 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-13 02:55:40.064647 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-13 02:55:40.064652 | orchestrator |
2026-02-13 02:55:40.064661 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-13 02:55:56.563361 | orchestrator | Friday 13 February 2026 02:55:40 +0000 (0:00:01.303) 0:00:23.794 *******
2026-02-13 02:55:56.563473 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:55:56.563489 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:55:56.563501 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:55:56.563513 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:55:56.563524 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:55:56.563535 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:55:56.563546 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:55:56.563558 | orchestrator |
2026-02-13 02:55:56.563569 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-13 02:55:56.563605 | orchestrator | Friday 13 February 2026 02:55:40 +0000 (0:00:00.638) 0:00:24.432 *******
2026-02-13 02:55:56.563683 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5
2026-02-13 02:55:56.563698 | orchestrator |
2026-02-13 02:55:56.563709 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-13 02:55:56.563720 | orchestrator | Friday 13 February 2026 02:55:45 +0000 (0:00:04.986) 0:00:29.419 *******
2026-02-13 02:55:56.563732 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.563747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.563759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.563770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.563781 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.563793 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.563804 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.563830 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.563849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.563861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.563872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.563902 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.563923 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.563934 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.563945 | orchestrator |
2026-02-13 02:55:56.563956 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-13 02:55:56.563968 | orchestrator | Friday 13 February 2026 02:55:51 +0000 (0:00:05.563) 0:00:34.982 *******
2026-02-13 02:55:56.563980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.563991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.564002 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.564013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.564025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.564036 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.564047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.564058 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.564083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-13 02:55:56.564103 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.564123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.564142 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:55:56.564165 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:56:03.024796 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-13 02:56:03.024915 | orchestrator |
2026-02-13 02:56:03.024933 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-13 02:56:03.024946 | orchestrator | Friday 13 February 2026 02:55:56 +0000 (0:00:05.310) 0:00:40.293 *******
2026-02-13 02:56:03.024960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 02:56:03.024972 | orchestrator |
2026-02-13 02:56:03.024991 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
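The netdev files generated above are rendered from the logged item values (interface name, vni, local_ip, mtu). A sketch of what such a file for vxlan0 on testbed-manager could look like, reconstructed from those parameters, is shown below; the role's actual template, file name, and section contents may differ.

```ini
; Sketch of /etc/systemd/network/30-vxlan0.netdev (reconstructed from the logged
; parameters: vni=42, local_ip=192.168.16.5, mtu=1350 -- file name is an assumption)
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
```

The per-peer 'dests' list has no place in a [VXLAN] section; in a unicast mesh like this one, peers are commonly expressed as all-zero-MAC flood entries ([BridgeFDB] with Destination=) in the matching .network file, though how this role actually wires the peers is not visible in the log.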
2026-02-13 02:56:03.025010 | orchestrator | Friday 13 February 2026 02:55:57 +0000 (0:00:01.373) 0:00:41.667 *******
2026-02-13 02:56:03.025028 | orchestrator | ok: [testbed-manager]
2026-02-13 02:56:03.025048 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:56:03.025067 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:56:03.025084 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:56:03.025102 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:56:03.025121 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:56:03.025140 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:56:03.025160 | orchestrator |
2026-02-13 02:56:03.025176 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-13 02:56:03.025187 | orchestrator | Friday 13 February 2026 02:55:59 +0000 (0:00:01.262) 0:00:42.929 *******
2026-02-13 02:56:03.025198 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-13 02:56:03.025210 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-13 02:56:03.025221 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-13 02:56:03.025232 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-13 02:56:03.025243 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:56:03.025255 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-13 02:56:03.025265 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-13 02:56:03.025276 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-13 02:56:03.025287 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-13 02:56:03.025298 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:56:03.025309 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-13 02:56:03.025319 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-13 02:56:03.025330 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-13 02:56:03.025340 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-13 02:56:03.025351 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:56:03.025362 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-13 02:56:03.025401 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-13 02:56:03.025412 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-13 02:56:03.025423 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-13 02:56:03.025448 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:56:03.025459 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-13 02:56:03.025470 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-13 02:56:03.025481 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-13 02:56:03.025492 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-13 02:56:03.025502 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:56:03.025513 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-13 02:56:03.025524 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-13 02:56:03.025534 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-13 02:56:03.025545 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-13 02:56:03.025555 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:56:03.025566 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-02-13 02:56:03.025577 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-02-13 02:56:03.025587 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-02-13 02:56:03.025598 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-02-13 02:56:03.025608 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:56:03.025647 | orchestrator | 2026-02-13 02:56:03.025660 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-02-13 02:56:03.025692 | orchestrator | Friday 13 February 2026 02:56:01 +0000 (0:00:02.071) 0:00:45.001 ******* 2026-02-13 02:56:03.025704 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:56:03.025715 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:56:03.025726 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:56:03.025737 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:56:03.025748 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:56:03.025758 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:56:03.025769 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:56:03.025780 | orchestrator | 2026-02-13 02:56:03.025791 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-02-13 02:56:03.025802 | orchestrator | Friday 13 February 2026 02:56:01 +0000 (0:00:00.658) 0:00:45.659 ******* 2026-02-13 02:56:03.025813 | orchestrator | skipping: [testbed-manager] 2026-02-13 02:56:03.025824 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:56:03.025835 | orchestrator 
| skipping: [testbed-node-1] 2026-02-13 02:56:03.025845 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:56:03.025857 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:56:03.025868 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:56:03.025878 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:56:03.025889 | orchestrator | 2026-02-13 02:56:03.025900 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:56:03.025913 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-13 02:56:03.025925 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 02:56:03.025936 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 02:56:03.025957 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 02:56:03.025968 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 02:56:03.025979 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 02:56:03.025990 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 02:56:03.026001 | orchestrator | 2026-02-13 02:56:03.026012 | orchestrator | 2026-02-13 02:56:03.026121 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 02:56:03.026134 | orchestrator | Friday 13 February 2026 02:56:02 +0000 (0:00:00.751) 0:00:46.411 ******* 2026-02-13 02:56:03.026145 | orchestrator | =============================================================================== 2026-02-13 02:56:03.026162 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.56s 
2026-02-13 02:56:03.026181 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.31s 2026-02-13 02:56:03.026201 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.99s 2026-02-13 02:56:03.026221 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.05s 2026-02-13 02:56:03.026239 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.26s 2026-02-13 02:56:03.026257 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.10s 2026-02-13 02:56:03.026275 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.07s 2026-02-13 02:56:03.026303 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.79s 2026-02-13 02:56:03.026320 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.66s 2026-02-13 02:56:03.026338 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.64s 2026-02-13 02:56:03.026357 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s 2026-02-13 02:56:03.026377 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.37s 2026-02-13 02:56:03.026396 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.30s 2026-02-13 02:56:03.026408 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.27s 2026-02-13 02:56:03.026419 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.26s 2026-02-13 02:56:03.026436 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.08s 2026-02-13 02:56:03.026454 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.04s 2026-02-13 
02:56:03.026473 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.99s 2026-02-13 02:56:03.026491 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s 2026-02-13 02:56:03.026511 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.92s 2026-02-13 02:56:03.334318 | orchestrator | + osism apply wireguard 2026-02-13 02:56:15.330686 | orchestrator | 2026-02-13 02:56:15 | INFO  | Task d8813203-d93d-431c-bd71-0deda07e9a79 (wireguard) was prepared for execution. 2026-02-13 02:56:15.330781 | orchestrator | 2026-02-13 02:56:15 | INFO  | It takes a moment until task d8813203-d93d-431c-bd71-0deda07e9a79 (wireguard) has been started and output is visible here. 2026-02-13 02:56:35.703190 | orchestrator | 2026-02-13 02:56:35.703336 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-13 02:56:35.703355 | orchestrator | 2026-02-13 02:56:35.703397 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-13 02:56:35.703410 | orchestrator | Friday 13 February 2026 02:56:19 +0000 (0:00:00.235) 0:00:00.235 ******* 2026-02-13 02:56:35.703421 | orchestrator | ok: [testbed-manager] 2026-02-13 02:56:35.703432 | orchestrator | 2026-02-13 02:56:35.703443 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-13 02:56:35.703454 | orchestrator | Friday 13 February 2026 02:56:21 +0000 (0:00:01.522) 0:00:01.758 ******* 2026-02-13 02:56:35.703464 | orchestrator | changed: [testbed-manager] 2026-02-13 02:56:35.703476 | orchestrator | 2026-02-13 02:56:35.703492 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-13 02:56:35.703504 | orchestrator | Friday 13 February 2026 02:56:27 +0000 (0:00:06.370) 0:00:08.128 ******* 2026-02-13 02:56:35.703514 | orchestrator | changed: 
[testbed-manager] 2026-02-13 02:56:35.703525 | orchestrator | 2026-02-13 02:56:35.703588 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-13 02:56:35.703600 | orchestrator | Friday 13 February 2026 02:56:28 +0000 (0:00:00.564) 0:00:08.692 ******* 2026-02-13 02:56:35.703611 | orchestrator | changed: [testbed-manager] 2026-02-13 02:56:35.703622 | orchestrator | 2026-02-13 02:56:35.703632 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-13 02:56:35.703643 | orchestrator | Friday 13 February 2026 02:56:28 +0000 (0:00:00.498) 0:00:09.191 ******* 2026-02-13 02:56:35.703654 | orchestrator | ok: [testbed-manager] 2026-02-13 02:56:35.703665 | orchestrator | 2026-02-13 02:56:35.703676 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-13 02:56:35.703688 | orchestrator | Friday 13 February 2026 02:56:29 +0000 (0:00:00.716) 0:00:09.908 ******* 2026-02-13 02:56:35.703700 | orchestrator | ok: [testbed-manager] 2026-02-13 02:56:35.703712 | orchestrator | 2026-02-13 02:56:35.703726 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-13 02:56:35.703738 | orchestrator | Friday 13 February 2026 02:56:29 +0000 (0:00:00.459) 0:00:10.367 ******* 2026-02-13 02:56:35.703750 | orchestrator | ok: [testbed-manager] 2026-02-13 02:56:35.703763 | orchestrator | 2026-02-13 02:56:35.703775 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-13 02:56:35.703787 | orchestrator | Friday 13 February 2026 02:56:30 +0000 (0:00:00.448) 0:00:10.816 ******* 2026-02-13 02:56:35.703800 | orchestrator | changed: [testbed-manager] 2026-02-13 02:56:35.703812 | orchestrator | 2026-02-13 02:56:35.703825 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-02-13 02:56:35.703837 | orchestrator 
| Friday 13 February 2026 02:56:31 +0000 (0:00:01.306) 0:00:12.122 ******* 2026-02-13 02:56:35.703849 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-13 02:56:35.703862 | orchestrator | changed: [testbed-manager] 2026-02-13 02:56:35.703876 | orchestrator | 2026-02-13 02:56:35.703887 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-02-13 02:56:35.703900 | orchestrator | Friday 13 February 2026 02:56:32 +0000 (0:00:00.989) 0:00:13.112 ******* 2026-02-13 02:56:35.703912 | orchestrator | changed: [testbed-manager] 2026-02-13 02:56:35.703924 | orchestrator | 2026-02-13 02:56:35.703937 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-02-13 02:56:35.703950 | orchestrator | Friday 13 February 2026 02:56:34 +0000 (0:00:01.785) 0:00:14.897 ******* 2026-02-13 02:56:35.703962 | orchestrator | changed: [testbed-manager] 2026-02-13 02:56:35.703973 | orchestrator | 2026-02-13 02:56:35.703984 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:56:35.703995 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 02:56:35.704007 | orchestrator | 2026-02-13 02:56:35.704018 | orchestrator | 2026-02-13 02:56:35.704029 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 02:56:35.704040 | orchestrator | Friday 13 February 2026 02:56:35 +0000 (0:00:00.950) 0:00:15.848 ******* 2026-02-13 02:56:35.704060 | orchestrator | =============================================================================== 2026-02-13 02:56:35.704071 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.37s 2026-02-13 02:56:35.704082 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.79s 2026-02-13 02:56:35.704092 | orchestrator | 
osism.services.wireguard : Install iptables package --------------------- 1.52s 2026-02-13 02:56:35.704103 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.31s 2026-02-13 02:56:35.704114 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s 2026-02-13 02:56:35.704125 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s 2026-02-13 02:56:35.704135 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.72s 2026-02-13 02:56:35.704146 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2026-02-13 02:56:35.704157 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.50s 2026-02-13 02:56:35.704167 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s 2026-02-13 02:56:35.704178 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.45s 2026-02-13 02:56:35.977157 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-02-13 02:56:36.011152 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-02-13 02:56:36.011236 | orchestrator | Dload Upload Total Spent Left Speed 2026-02-13 02:56:36.093936 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 181 0 --:--:-- --:--:-- --:--:-- 182 2026-02-13 02:56:36.110816 | orchestrator | + osism apply --environment custom workarounds 2026-02-13 02:56:37.967454 | orchestrator | 2026-02-13 02:56:37 | INFO  | Trying to run play workarounds in environment custom 2026-02-13 02:56:48.045624 | orchestrator | 2026-02-13 02:56:48 | INFO  | Task aeced058-1595-4053-9ec1-e806bf0f796a (workarounds) was prepared for execution. 
2026-02-13 02:56:48.045723 | orchestrator | 2026-02-13 02:56:48 | INFO  | It takes a moment until task aeced058-1595-4053-9ec1-e806bf0f796a (workarounds) has been started and output is visible here.
2026-02-13 02:57:12.948535 | orchestrator |
2026-02-13 02:57:12.948653 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 02:57:12.948670 | orchestrator |
2026-02-13 02:57:12.948682 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-13 02:57:12.948694 | orchestrator | Friday 13 February 2026 02:56:52 +0000 (0:00:00.121) 0:00:00.121 *******
2026-02-13 02:57:12.948706 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-13 02:57:12.948717 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-13 02:57:12.948727 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-13 02:57:12.948738 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-13 02:57:12.948749 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-13 02:57:12.948760 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-13 02:57:12.948770 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-13 02:57:12.948781 | orchestrator |
2026-02-13 02:57:12.948792 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-13 02:57:12.948803 | orchestrator |
2026-02-13 02:57:12.948813 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-13 02:57:12.948824 | orchestrator | Friday 13 February 2026 02:56:52 +0000 (0:00:00.781) 0:00:00.903 *******
2026-02-13 02:57:12.948835 | orchestrator | ok: [testbed-manager]
2026-02-13 02:57:12.948847 | orchestrator |
2026-02-13 02:57:12.948858 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-13 02:57:12.948912 | orchestrator |
2026-02-13 02:57:12.948924 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-13 02:57:12.948946 | orchestrator | Friday 13 February 2026 02:56:54 +0000 (0:00:02.161) 0:00:03.064 *******
2026-02-13 02:57:12.948958 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:57:12.948982 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:57:12.948993 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:57:12.949004 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:57:12.949014 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:57:12.949025 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:57:12.949036 | orchestrator |
2026-02-13 02:57:12.949046 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-13 02:57:12.949057 | orchestrator |
2026-02-13 02:57:12.949068 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-13 02:57:12.949079 | orchestrator | Friday 13 February 2026 02:56:56 +0000 (0:00:01.864) 0:00:04.928 *******
2026-02-13 02:57:12.949090 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-13 02:57:12.949102 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-13 02:57:12.949113 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-13 02:57:12.949123 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-13 02:57:12.949134 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-13 02:57:12.949161 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-13 02:57:12.949172 | orchestrator |
2026-02-13 02:57:12.949183 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-13 02:57:12.949193 | orchestrator | Friday 13 February 2026 02:56:58 +0000 (0:00:01.464) 0:00:06.393 *******
2026-02-13 02:57:12.949204 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:57:12.949216 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:57:12.949226 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:57:12.949237 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:57:12.949247 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:57:12.949258 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:57:12.949268 | orchestrator |
2026-02-13 02:57:12.949279 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-13 02:57:12.949290 | orchestrator | Friday 13 February 2026 02:57:02 +0000 (0:00:03.931) 0:00:10.324 *******
2026-02-13 02:57:12.949301 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:57:12.949311 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:57:12.949323 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:57:12.949334 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:57:12.949344 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:57:12.949355 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:57:12.949385 | orchestrator |
2026-02-13 02:57:12.949396 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-13 02:57:12.949407 | orchestrator |
2026-02-13 02:57:12.949418 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-13 02:57:12.949429 | orchestrator | Friday 13 February 2026 02:57:02 +0000 (0:00:00.622) 0:00:10.947 *******
2026-02-13 02:57:12.949439 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:57:12.949450 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:57:12.949460 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:57:12.949471 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:57:12.949482 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:57:12.949492 | orchestrator | changed: [testbed-manager]
2026-02-13 02:57:12.949503 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:57:12.949513 | orchestrator |
2026-02-13 02:57:12.949524 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-13 02:57:12.949542 | orchestrator | Friday 13 February 2026 02:57:04 +0000 (0:00:01.604) 0:00:12.552 *******
2026-02-13 02:57:12.949553 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:57:12.949563 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:57:12.949574 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:57:12.949584 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:57:12.949595 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:57:12.949605 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:57:12.949635 | orchestrator | changed: [testbed-manager]
2026-02-13 02:57:12.949646 | orchestrator |
2026-02-13 02:57:12.949657 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-13 02:57:12.949668 | orchestrator | Friday 13 February 2026 02:57:05 +0000 (0:00:01.533) 0:00:14.040 *******
2026-02-13 02:57:12.949678 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:57:12.949689 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:57:12.949700 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:57:12.949711 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:57:12.949721 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:57:12.949732 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:57:12.949742 | orchestrator | ok: [testbed-manager]
2026-02-13 02:57:12.949753 | orchestrator |
2026-02-13 02:57:12.949765 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-13 02:57:12.949783 | orchestrator | Friday 13 February 2026 02:57:07 +0000 (0:00:01.533) 0:00:15.574 *******
2026-02-13 02:57:12.949801 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:57:12.949812 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:57:12.949822 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:57:12.949833 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:57:12.949844 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:57:12.949854 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:57:12.949865 | orchestrator | changed: [testbed-manager]
2026-02-13 02:57:12.949875 | orchestrator |
2026-02-13 02:57:12.949886 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-13 02:57:12.949897 | orchestrator | Friday 13 February 2026 02:57:09 +0000 (0:00:01.765) 0:00:17.339 *******
2026-02-13 02:57:12.949908 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:57:12.949918 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:57:12.949929 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:57:12.949939 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:57:12.949950 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:57:12.949960 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:57:12.949971 | orchestrator | skipping: [testbed-manager]
2026-02-13 02:57:12.949981 | orchestrator |
2026-02-13 02:57:12.949992 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-13 02:57:12.950003 | orchestrator |
2026-02-13 02:57:12.950060 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-13 02:57:12.950074 | orchestrator | Friday 13 February 2026 02:57:09 +0000 (0:00:00.614) 0:00:17.953 *******
2026-02-13 02:57:12.950085 | orchestrator | ok: [testbed-manager]
2026-02-13 02:57:12.950096 | orchestrator | ok: [testbed-node-0]
2026-02-13 02:57:12.950106 | orchestrator | ok: [testbed-node-2]
2026-02-13 02:57:12.950117 | orchestrator | ok: [testbed-node-3]
2026-02-13 02:57:12.950128 | orchestrator | ok: [testbed-node-5]
2026-02-13 02:57:12.950138 | orchestrator | ok: [testbed-node-1]
2026-02-13 02:57:12.950149 | orchestrator | ok: [testbed-node-4]
2026-02-13 02:57:12.950159 | orchestrator |
2026-02-13 02:57:12.950170 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 02:57:12.950183 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-13 02:57:12.950194 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 02:57:12.950214 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 02:57:12.950231 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 02:57:12.950242 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 02:57:12.950253 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 02:57:12.950263 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 02:57:12.950274 | orchestrator |
2026-02-13 02:57:12.950285 | orchestrator |
2026-02-13 02:57:12.950296 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 02:57:12.950307 | orchestrator | Friday 13 February 2026 02:57:12 +0000 (0:00:03.057) 0:00:21.011 *******
2026-02-13 02:57:12.950318 | orchestrator | ===============================================================================
2026-02-13 02:57:12.950328 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.93s
2026-02-13 02:57:12.950339 | orchestrator | Install python3-docker -------------------------------------------------- 3.06s
2026-02-13 02:57:12.950350 | orchestrator | Apply netplan configuration --------------------------------------------- 2.16s
2026-02-13 02:57:12.950382 | orchestrator | Apply netplan configuration --------------------------------------------- 1.86s
2026-02-13 02:57:12.950394 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.77s
2026-02-13 02:57:12.950405 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.60s
2026-02-13 02:57:12.950415 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.53s
2026-02-13 02:57:12.950426 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.49s
2026-02-13 02:57:12.950437 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s
2026-02-13 02:57:12.950447 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.78s
2026-02-13 02:57:12.950458 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.62s
2026-02-13 02:57:12.950478 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s
2026-02-13 02:57:13.626664 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-13 02:57:25.715768 | orchestrator | 2026-02-13 02:57:25 | INFO  | Task 60b2a3a4-5e1f-4d7d-b608-7735275a7dcf (reboot) was prepared for execution.
2026-02-13 02:57:25.715882 | orchestrator | 2026-02-13 02:57:25 | INFO  | It takes a moment until task 60b2a3a4-5e1f-4d7d-b608-7735275a7dcf (reboot) has been started and output is visible here.
2026-02-13 02:57:35.729171 | orchestrator |
2026-02-13 02:57:35.729338 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-13 02:57:35.729366 | orchestrator |
2026-02-13 02:57:35.729385 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-13 02:57:35.729404 | orchestrator | Friday 13 February 2026 02:57:29 +0000 (0:00:00.226) 0:00:00.226 *******
2026-02-13 02:57:35.729423 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:57:35.729442 | orchestrator |
2026-02-13 02:57:35.729461 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-13 02:57:35.729481 | orchestrator | Friday 13 February 2026 02:57:29 +0000 (0:00:00.095) 0:00:00.322 *******
2026-02-13 02:57:35.729500 | orchestrator | changed: [testbed-node-0]
2026-02-13 02:57:35.729519 | orchestrator |
2026-02-13 02:57:35.729537 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-13 02:57:35.729557 | orchestrator | Friday 13 February 2026 02:57:30 +0000 (0:00:00.920) 0:00:01.243 *******
2026-02-13 02:57:35.729609 | orchestrator | skipping: [testbed-node-0]
2026-02-13 02:57:35.729630 | orchestrator |
2026-02-13 02:57:35.729649 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-13 02:57:35.729668 | orchestrator |
2026-02-13 02:57:35.729686 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-13 02:57:35.729705 | orchestrator | Friday 13 February 2026 02:57:30 +0000 (0:00:00.111) 0:00:01.355 *******
2026-02-13 02:57:35.729717 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:57:35.729728 | orchestrator |
2026-02-13 02:57:35.729738 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-13 02:57:35.729749 | orchestrator | Friday 13 February 2026 02:57:31 +0000 (0:00:00.101) 0:00:01.456 *******
2026-02-13 02:57:35.729760 | orchestrator | changed: [testbed-node-1]
2026-02-13 02:57:35.729771 | orchestrator |
2026-02-13 02:57:35.729781 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-13 02:57:35.729792 | orchestrator | Friday 13 February 2026 02:57:31 +0000 (0:00:00.651) 0:00:02.108 *******
2026-02-13 02:57:35.729802 | orchestrator | skipping: [testbed-node-1]
2026-02-13 02:57:35.729813 | orchestrator |
2026-02-13 02:57:35.729824 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-13 02:57:35.729835 | orchestrator |
2026-02-13 02:57:35.729845 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-13 02:57:35.729856 | orchestrator | Friday 13 February 2026 02:57:31 +0000 (0:00:00.102) 0:00:02.211 *******
2026-02-13 02:57:35.729866 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:57:35.729877 | orchestrator |
2026-02-13 02:57:35.729888 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-13 02:57:35.729899 | orchestrator | Friday 13 February 2026 02:57:32 +0000 (0:00:00.196) 0:00:02.407 *******
2026-02-13 02:57:35.729915 | orchestrator | changed: [testbed-node-2]
2026-02-13 02:57:35.729934 | orchestrator |
2026-02-13 02:57:35.729971 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-13 02:57:35.729992 | orchestrator | Friday 13 February 2026 02:57:32 +0000 (0:00:00.660) 0:00:03.068 *******
2026-02-13 02:57:35.730012 | orchestrator | skipping: [testbed-node-2]
2026-02-13 02:57:35.730143 | orchestrator |
2026-02-13 02:57:35.730166 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-13 02:57:35.730186 | orchestrator |
2026-02-13 02:57:35.730198 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-13 02:57:35.730209 | orchestrator | Friday 13 February 2026 02:57:32 +0000 (0:00:00.121) 0:00:03.189 *******
2026-02-13 02:57:35.730220 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:57:35.730230 | orchestrator |
2026-02-13 02:57:35.730241 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-13 02:57:35.730252 | orchestrator | Friday 13 February 2026 02:57:32 +0000 (0:00:00.105) 0:00:03.294 *******
2026-02-13 02:57:35.730262 | orchestrator | changed: [testbed-node-3]
2026-02-13 02:57:35.730318 | orchestrator |
2026-02-13 02:57:35.730338 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-13 02:57:35.730356 | orchestrator | Friday 13 February 2026 02:57:33 +0000 (0:00:00.642) 0:00:03.937 *******
2026-02-13 02:57:35.730375 | orchestrator | skipping: [testbed-node-3]
2026-02-13 02:57:35.730393 | orchestrator |
2026-02-13 02:57:35.730412 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-13 02:57:35.730430 | orchestrator |
2026-02-13 02:57:35.730449 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-13 02:57:35.730461 | orchestrator | Friday 13 February 2026 02:57:33 +0000 (0:00:00.118) 0:00:04.056 *******
2026-02-13 02:57:35.730471 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:57:35.730482 | orchestrator |
2026-02-13 02:57:35.730493 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-13 02:57:35.730503 | orchestrator | Friday 13 February 2026 02:57:33 +0000 (0:00:00.101) 0:00:04.157 *******
2026-02-13 02:57:35.730526 | orchestrator | changed: [testbed-node-4]
2026-02-13 02:57:35.730537 | orchestrator |
2026-02-13 02:57:35.730548 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-13 02:57:35.730558 | orchestrator | Friday 13 February 2026 02:57:34 +0000 (0:00:00.698) 0:00:04.856 *******
2026-02-13 02:57:35.730569 | orchestrator | skipping: [testbed-node-4]
2026-02-13 02:57:35.730579 | orchestrator |
2026-02-13 02:57:35.730591 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-13 02:57:35.730602 | orchestrator |
2026-02-13 02:57:35.730612 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-13 02:57:35.730623 | orchestrator | Friday 13 February 2026 02:57:34 +0000 (0:00:00.123) 0:00:04.979 *******
2026-02-13 02:57:35.730633 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:57:35.730644 | orchestrator |
2026-02-13 02:57:35.730655 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-13 02:57:35.730667 | orchestrator | Friday 13 February 2026 02:57:34 +0000 (0:00:00.101) 0:00:05.080 *******
2026-02-13 02:57:35.730685 | orchestrator | changed: [testbed-node-5]
2026-02-13 02:57:35.730703 | orchestrator |
2026-02-13 02:57:35.730720 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-13 02:57:35.730739 | orchestrator | Friday 13 February 2026 02:57:35 +0000 (0:00:00.684) 0:00:05.765 *******
2026-02-13 02:57:35.730784 | orchestrator | skipping: [testbed-node-5]
2026-02-13 02:57:35.730804 | orchestrator |
2026-02-13 02:57:35.730821 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 02:57:35.730833 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 02:57:35.730846 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 02:57:35.730856 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 02:57:35.730959 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 02:57:35.730972 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 02:57:35.730983 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 02:57:35.730994 | orchestrator | 2026-02-13 02:57:35.731005 | orchestrator | 2026-02-13 02:57:35.731015 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 02:57:35.731026 | orchestrator | Friday 13 February 2026 02:57:35 +0000 (0:00:00.041) 0:00:05.807 ******* 2026-02-13 02:57:35.731037 | orchestrator | =============================================================================== 2026-02-13 02:57:35.731054 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.26s 2026-02-13 02:57:35.731073 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.70s 2026-02-13 02:57:35.731091 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2026-02-13 02:57:35.993882 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-02-13 02:57:47.978692 | orchestrator | 2026-02-13 02:57:47 | INFO  | Task 824605ec-21e3-419a-a58a-2729f7a4ccc8 (wait-for-connection) was prepared for execution. 2026-02-13 02:57:47.978823 | orchestrator | 2026-02-13 02:57:47 | INFO  | It takes a moment until task 824605ec-21e3-419a-a58a-2729f7a4ccc8 (wait-for-connection) has been started and output is visible here. 
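Editor's note: the PLAY RECAP lines above report per-host counters (`ok`, `changed`, `unreachable`, `failed`, ...). A minimal shell sketch of how such a recap line could be checked for failures when post-processing a log like this one; the `recap_host_ok` helper and the sample lines are illustrative, not part of the testbed scripts.

```shell
#!/bin/sh
# Hypothetical helper: given one Ansible PLAY RECAP line, succeed only if
# the host reported failed=0 and unreachable=0.
recap_host_ok() {
    line="$1"
    # Pull the value that follows each key= token.
    failed=$(printf '%s\n' "$line" | awk -F'failed=' '{print $2}' | awk '{print $1}')
    unreachable=$(printf '%s\n' "$line" | awk -F'unreachable=' '{print $2}' | awk '{print $1}')
    # Default to 1 (failure) if a counter is missing, so malformed lines are flagged.
    [ "${failed:-1}" -eq 0 ] && [ "${unreachable:-1}" -eq 0 ]
}

good='testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0'
if recap_host_ok "$good"; then
    echo "host healthy"
else
    echo "host FAILED"
fi
```

Run against the recap line above, this prints `host healthy`; a line with `failed=1` or `unreachable=1` would take the other branch.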
2026-02-13 02:58:03.729880 | orchestrator | 2026-02-13 02:58:03.730082 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-02-13 02:58:03.730130 | orchestrator | 2026-02-13 02:58:03.730143 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-02-13 02:58:03.730192 | orchestrator | Friday 13 February 2026 02:57:51 +0000 (0:00:00.168) 0:00:00.168 ******* 2026-02-13 02:58:03.730205 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:58:03.730217 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:58:03.730228 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:58:03.730239 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:58:03.730250 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:58:03.730261 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:58:03.730272 | orchestrator | 2026-02-13 02:58:03.730283 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:58:03.730295 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 02:58:03.730308 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 02:58:03.730319 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 02:58:03.730330 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 02:58:03.730341 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 02:58:03.730352 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 02:58:03.730363 | orchestrator | 2026-02-13 02:58:03.730374 | orchestrator | 2026-02-13 02:58:03.730385 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-13 02:58:03.730397 | orchestrator | Friday 13 February 2026 02:58:03 +0000 (0:00:11.518) 0:00:11.687 ******* 2026-02-13 02:58:03.730407 | orchestrator | =============================================================================== 2026-02-13 02:58:03.730419 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.52s 2026-02-13 02:58:04.003115 | orchestrator | + osism apply hddtemp 2026-02-13 02:58:15.978490 | orchestrator | 2026-02-13 02:58:15 | INFO  | Task deff063e-cb82-4926-bc4b-4b77d60b2b67 (hddtemp) was prepared for execution. 2026-02-13 02:58:15.978580 | orchestrator | 2026-02-13 02:58:15 | INFO  | It takes a moment until task deff063e-cb82-4926-bc4b-4b77d60b2b67 (hddtemp) has been started and output is visible here. 2026-02-13 02:58:43.927848 | orchestrator | 2026-02-13 02:58:43.927922 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-02-13 02:58:43.927928 | orchestrator | 2026-02-13 02:58:43.927932 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-02-13 02:58:43.927937 | orchestrator | Friday 13 February 2026 02:58:19 +0000 (0:00:00.185) 0:00:00.185 ******* 2026-02-13 02:58:43.927941 | orchestrator | ok: [testbed-manager] 2026-02-13 02:58:43.927947 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:58:43.927951 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:58:43.927955 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:58:43.927959 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:58:43.927962 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:58:43.927966 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:58:43.927970 | orchestrator | 2026-02-13 02:58:43.927974 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2026-02-13 02:58:43.927978 | orchestrator | Friday 13 February 2026 
02:58:20 +0000 (0:00:00.528) 0:00:00.713 ******* 2026-02-13 02:58:43.927983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 02:58:43.927989 | orchestrator | 2026-02-13 02:58:43.928044 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-13 02:58:43.928049 | orchestrator | Friday 13 February 2026 02:58:21 +0000 (0:00:01.092) 0:00:01.806 ******* 2026-02-13 02:58:43.928053 | orchestrator | ok: [testbed-manager] 2026-02-13 02:58:43.928056 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:58:43.928060 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:58:43.928064 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:58:43.928068 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:58:43.928072 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:58:43.928076 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:58:43.928079 | orchestrator | 2026-02-13 02:58:43.928083 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-13 02:58:43.928087 | orchestrator | Friday 13 February 2026 02:58:23 +0000 (0:00:01.832) 0:00:03.639 ******* 2026-02-13 02:58:43.928091 | orchestrator | changed: [testbed-manager] 2026-02-13 02:58:43.928095 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:58:43.928099 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:58:43.928103 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:58:43.928107 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:58:43.928110 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:58:43.928114 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:58:43.928118 | orchestrator | 2026-02-13 02:58:43.928121 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-13 02:58:43.928125 | orchestrator | Friday 13 February 2026 02:58:24 +0000 (0:00:01.022) 0:00:04.661 ******* 2026-02-13 02:58:43.928129 | orchestrator | ok: [testbed-node-1] 2026-02-13 02:58:43.928133 | orchestrator | ok: [testbed-node-0] 2026-02-13 02:58:43.928136 | orchestrator | ok: [testbed-node-2] 2026-02-13 02:58:43.928140 | orchestrator | ok: [testbed-manager] 2026-02-13 02:58:43.928153 | orchestrator | ok: [testbed-node-3] 2026-02-13 02:58:43.928157 | orchestrator | ok: [testbed-node-5] 2026-02-13 02:58:43.928161 | orchestrator | ok: [testbed-node-4] 2026-02-13 02:58:43.928164 | orchestrator | 2026-02-13 02:58:43.928168 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-13 02:58:43.928172 | orchestrator | Friday 13 February 2026 02:58:26 +0000 (0:00:02.052) 0:00:06.714 ******* 2026-02-13 02:58:43.928176 | orchestrator | skipping: [testbed-node-0] 2026-02-13 02:58:43.928179 | orchestrator | skipping: [testbed-node-1] 2026-02-13 02:58:43.928183 | orchestrator | skipping: [testbed-node-2] 2026-02-13 02:58:43.928187 | orchestrator | changed: [testbed-manager] 2026-02-13 02:58:43.928191 | orchestrator | skipping: [testbed-node-3] 2026-02-13 02:58:43.928194 | orchestrator | skipping: [testbed-node-4] 2026-02-13 02:58:43.928198 | orchestrator | skipping: [testbed-node-5] 2026-02-13 02:58:43.928202 | orchestrator | 2026-02-13 02:58:43.928205 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-13 02:58:43.928209 | orchestrator | Friday 13 February 2026 02:58:27 +0000 (0:00:00.784) 0:00:07.499 ******* 2026-02-13 02:58:43.928213 | orchestrator | changed: [testbed-manager] 2026-02-13 02:58:43.928217 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:58:43.928220 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:58:43.928224 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:58:43.928228 | orchestrator | changed: 
[testbed-node-4] 2026-02-13 02:58:43.928231 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:58:43.928235 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:58:43.928239 | orchestrator | 2026-02-13 02:58:43.928243 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-13 02:58:43.928246 | orchestrator | Friday 13 February 2026 02:58:40 +0000 (0:00:13.263) 0:00:20.762 ******* 2026-02-13 02:58:43.928250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 02:58:43.928254 | orchestrator | 2026-02-13 02:58:43.928258 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-13 02:58:43.928266 | orchestrator | Friday 13 February 2026 02:58:41 +0000 (0:00:01.228) 0:00:21.991 ******* 2026-02-13 02:58:43.928270 | orchestrator | changed: [testbed-manager] 2026-02-13 02:58:43.928274 | orchestrator | changed: [testbed-node-0] 2026-02-13 02:58:43.928277 | orchestrator | changed: [testbed-node-1] 2026-02-13 02:58:43.928281 | orchestrator | changed: [testbed-node-2] 2026-02-13 02:58:43.928285 | orchestrator | changed: [testbed-node-3] 2026-02-13 02:58:43.928289 | orchestrator | changed: [testbed-node-4] 2026-02-13 02:58:43.928292 | orchestrator | changed: [testbed-node-5] 2026-02-13 02:58:43.928296 | orchestrator | 2026-02-13 02:58:43.928299 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 02:58:43.928303 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 02:58:43.928319 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 02:58:43.928324 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 02:58:43.928327 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 02:58:43.928331 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 02:58:43.928335 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 02:58:43.928338 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 02:58:43.928342 | orchestrator | 2026-02-13 02:58:43.928346 | orchestrator | 2026-02-13 02:58:43.928350 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 02:58:43.928353 | orchestrator | Friday 13 February 2026 02:58:43 +0000 (0:00:01.880) 0:00:23.871 ******* 2026-02-13 02:58:43.928357 | orchestrator | =============================================================================== 2026-02-13 02:58:43.928361 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.26s 2026-02-13 02:58:43.928365 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.05s 2026-02-13 02:58:43.928368 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.88s 2026-02-13 02:58:43.928372 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.83s 2026-02-13 02:58:43.928376 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.23s 2026-02-13 02:58:43.928379 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.09s 2026-02-13 02:58:43.928383 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.02s 2026-02-13 02:58:43.928387 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.78s 2026-02-13 02:58:43.928391 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.53s 2026-02-13 02:58:44.214129 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-13 02:58:44.278103 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-13 02:58:44.278196 | orchestrator | + sudo systemctl restart manager.service 2026-02-13 02:58:57.928596 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-13 02:58:57.928738 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-13 02:58:57.928767 | orchestrator | + local max_attempts=60 2026-02-13 02:58:57.928790 | orchestrator | + local name=ceph-ansible 2026-02-13 02:58:57.928811 | orchestrator | + local attempt_num=1 2026-02-13 02:58:57.928830 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:58:57.961126 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-13 02:58:57.961242 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:58:57.961256 | orchestrator | + sleep 5 2026-02-13 02:59:02.966125 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:03.004366 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:03.004455 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:59:03.004469 | orchestrator | + sleep 5 2026-02-13 02:59:08.011210 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:08.059371 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:08.059449 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:59:08.059457 | orchestrator | + sleep 5 2026-02-13 02:59:13.064419 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:13.109019 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:13.109119 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-13 02:59:13.109133 | orchestrator | + sleep 5 2026-02-13 02:59:18.114595 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:18.154208 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:18.154427 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:59:18.154461 | orchestrator | + sleep 5 2026-02-13 02:59:23.159033 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:23.200464 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:23.200552 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:59:23.200566 | orchestrator | + sleep 5 2026-02-13 02:59:28.205574 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:28.241188 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:28.241300 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:59:28.241317 | orchestrator | + sleep 5 2026-02-13 02:59:33.246119 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:33.277248 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:33.277355 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:59:33.277371 | orchestrator | + sleep 5 2026-02-13 02:59:38.279880 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:38.310593 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:38.310690 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:59:38.310705 | orchestrator | + sleep 5 2026-02-13 02:59:43.313955 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:43.354292 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:43.354427 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-13 02:59:43.354443 | orchestrator | + sleep 5 2026-02-13 02:59:48.358651 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:48.388246 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:48.388339 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:59:48.388353 | orchestrator | + sleep 5 2026-02-13 02:59:53.392385 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:53.433444 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:53.433543 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:59:53.433559 | orchestrator | + sleep 5 2026-02-13 02:59:58.438490 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 02:59:58.484840 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-13 02:59:58.484938 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-13 02:59:58.484953 | orchestrator | + sleep 5 2026-02-13 03:00:03.488713 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-13 03:00:03.525244 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-13 03:00:03.525350 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-13 03:00:03.525375 | orchestrator | + local max_attempts=60 2026-02-13 03:00:03.525395 | orchestrator | + local name=kolla-ansible 2026-02-13 03:00:03.525416 | orchestrator | + local attempt_num=1 2026-02-13 03:00:03.526084 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-13 03:00:03.554152 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-13 03:00:03.554236 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-13 03:00:03.554250 | orchestrator | + local max_attempts=60 2026-02-13 03:00:03.554263 | orchestrator | + local name=osism-ansible 2026-02-13 03:00:03.554305 | 
orchestrator | + local attempt_num=1 2026-02-13 03:00:03.554661 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-13 03:00:03.592055 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-13 03:00:03.592172 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-13 03:00:03.592199 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-13 03:00:03.800116 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-13 03:00:03.963274 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-13 03:00:04.132094 | orchestrator | ARA in osism-ansible already disabled. 2026-02-13 03:00:04.277512 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-13 03:00:04.278478 | orchestrator | + osism apply gather-facts 2026-02-13 03:00:16.362076 | orchestrator | 2026-02-13 03:00:16 | INFO  | Task ddb2f415-c73d-4aca-b082-2baa942f86a9 (gather-facts) was prepared for execution. 2026-02-13 03:00:16.362193 | orchestrator | 2026-02-13 03:00:16 | INFO  | It takes a moment until task ddb2f415-c73d-4aca-b082-2baa942f86a9 (gather-facts) has been started and output is visible here. 
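Editor's note: the `wait_for_container_healthy` trace above polls `/usr/bin/docker inspect -f '{{.State.Health.Status}}' <name>` every 5 seconds until it prints `healthy`, giving up after `max_attempts` tries. A minimal sketch of that polling pattern, with the Docker call replaced by a stand-in `probe_status` function (file-backed counter, purely illustrative) so the loop is runnable without Docker.

```shell
#!/bin/sh
# Stand-in for the real health probe. State lives in a temp file because
# $(probe_status) runs in a subshell, where in-memory variables would not persist.
COUNT_FILE=$(mktemp)
echo 0 > "$COUNT_FILE"
probe_status() {
    n=$(($(cat "$COUNT_FILE") + 1))
    echo "$n" > "$COUNT_FILE"
    # Report "starting" for the first two polls, then "healthy" (illustrative).
    [ "$n" -ge 3 ] && echo healthy || echo starting
}

# Polling loop mirroring the traced wait_for_container_healthy: retry until
# the probe reports healthy, or fail after max_attempts tries.
wait_for_healthy() {
    max_attempts=$1
    attempt_num=1
    while [ "$(probe_status)" != healthy ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            return 1            # gave up
        fi
        attempt_num=$((attempt_num + 1))
        sleep 0.1               # the real loop sleeps 5 seconds
    done
    return 0
}

wait_for_healthy 60 && echo "container healthy after $(cat "$COUNT_FILE") polls"
```

With the stand-in probe this prints `container healthy after 3 polls`; in the trace above the same loop cycled through `unhealthy` and `starting` for roughly a minute before `docker inspect` finally returned `healthy`.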
2026-02-13 03:00:30.522598 | orchestrator | 2026-02-13 03:00:30.522760 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-13 03:00:30.522776 | orchestrator | 2026-02-13 03:00:30.522786 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-13 03:00:30.522797 | orchestrator | Friday 13 February 2026 03:00:20 +0000 (0:00:00.160) 0:00:00.160 ******* 2026-02-13 03:00:30.522808 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:00:30.522819 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:00:30.522829 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:00:30.522839 | orchestrator | ok: [testbed-manager] 2026-02-13 03:00:30.522849 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:00:30.522858 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:00:30.522868 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:00:30.522877 | orchestrator | 2026-02-13 03:00:30.522887 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-13 03:00:30.522897 | orchestrator | 2026-02-13 03:00:30.522907 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-13 03:00:30.522917 | orchestrator | Friday 13 February 2026 03:00:29 +0000 (0:00:09.337) 0:00:09.498 ******* 2026-02-13 03:00:30.522927 | orchestrator | skipping: [testbed-manager] 2026-02-13 03:00:30.522937 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:00:30.522947 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:00:30.522957 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:00:30.522966 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:00:30.522976 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:00:30.522986 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:00:30.522995 | orchestrator | 2026-02-13 03:00:30.523005 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-13 03:00:30.523015 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 03:00:30.523026 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 03:00:30.523036 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 03:00:30.523046 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 03:00:30.523055 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 03:00:30.523065 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 03:00:30.523075 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 03:00:30.523111 | orchestrator | 2026-02-13 03:00:30.523121 | orchestrator | 2026-02-13 03:00:30.523131 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:00:30.523140 | orchestrator | Friday 13 February 2026 03:00:30 +0000 (0:00:00.577) 0:00:10.075 ******* 2026-02-13 03:00:30.523150 | orchestrator | =============================================================================== 2026-02-13 03:00:30.523159 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.34s 2026-02-13 03:00:30.523169 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s 2026-02-13 03:00:30.814783 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-13 03:00:30.833597 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-13 
03:00:30.853598 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-13 03:00:30.868038 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-13 03:00:30.881174 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-13 03:00:30.901898 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-13 03:00:30.917539 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-13 03:00:30.931628 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-13 03:00:30.943459 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-13 03:00:30.962621 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-13 03:00:30.976177 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-13 03:00:30.988014 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-13 03:00:31.006965 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-13 03:00:31.020979 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-13 03:00:31.038864 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-13 03:00:31.052721 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-02-13 03:00:31.069103 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-02-13 03:00:31.086513 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-02-13 03:00:31.107601 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-02-13 03:00:31.120130 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-02-13 03:00:31.135703 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-02-13 03:00:31.148186 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-02-13 03:00:31.159916 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-02-13 03:00:31.178272 | orchestrator | + [[ false == \t\r\u\e ]] 2026-02-13 03:00:31.512910 | orchestrator | ok: Runtime: 0:23:51.539601 2026-02-13 03:00:31.733147 | 2026-02-13 03:00:31.733248 | TASK [Deploy services] 2026-02-13 03:00:32.819705 | orchestrator | 2026-02-13 03:00:32.819885 | orchestrator | # DEPLOY SERVICES 2026-02-13 03:00:32.819910 | orchestrator | 2026-02-13 03:00:32.819923 | orchestrator | + set -e 2026-02-13 03:00:32.819935 | orchestrator | + echo 2026-02-13 03:00:32.819948 | orchestrator | + echo '# DEPLOY SERVICES' 2026-02-13 03:00:32.819961 | orchestrator | + echo 2026-02-13 03:00:32.820002 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 03:00:32.820022 | orchestrator | ++ export INTERACTIVE=false 2026-02-13 03:00:32.820036 | orchestrator | ++ INTERACTIVE=false 2026-02-13 
03:00:32.820047 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-13 03:00:32.820066 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-13 03:00:32.820076 | orchestrator | + source /opt/manager-vars.sh 2026-02-13 03:00:32.820089 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-13 03:00:32.820099 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-13 03:00:32.820115 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-13 03:00:32.820125 | orchestrator | ++ CEPH_VERSION=reef 2026-02-13 03:00:32.820138 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-13 03:00:32.820148 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-13 03:00:32.820161 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-13 03:00:32.820171 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-13 03:00:32.820181 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-13 03:00:32.820192 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-13 03:00:32.820201 | orchestrator | ++ export ARA=false 2026-02-13 03:00:32.820211 | orchestrator | ++ ARA=false 2026-02-13 03:00:32.820221 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-13 03:00:32.820231 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-13 03:00:32.820240 | orchestrator | ++ export TEMPEST=false 2026-02-13 03:00:32.820250 | orchestrator | ++ TEMPEST=false 2026-02-13 03:00:32.820260 | orchestrator | ++ export IS_ZUUL=true 2026-02-13 03:00:32.820269 | orchestrator | ++ IS_ZUUL=true 2026-02-13 03:00:32.820279 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 03:00:32.820289 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 03:00:32.820299 | orchestrator | ++ export EXTERNAL_API=false 2026-02-13 03:00:32.820309 | orchestrator | ++ EXTERNAL_API=false 2026-02-13 03:00:32.820318 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-13 03:00:32.820328 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-13 03:00:32.820337 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-13 
03:00:32.820347 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-13 03:00:32.820356 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-13 03:00:32.820372 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-13 03:00:32.820382 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-13 03:00:32.828485 | orchestrator | + set -e 2026-02-13 03:00:32.829830 | orchestrator | 2026-02-13 03:00:32.829856 | orchestrator | # PULL IMAGES 2026-02-13 03:00:32.829868 | orchestrator | 2026-02-13 03:00:32.829879 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 03:00:32.829893 | orchestrator | ++ export INTERACTIVE=false 2026-02-13 03:00:32.829905 | orchestrator | ++ INTERACTIVE=false 2026-02-13 03:00:32.829915 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-13 03:00:32.829926 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-13 03:00:32.829937 | orchestrator | + source /opt/manager-vars.sh 2026-02-13 03:00:32.829947 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-13 03:00:32.829958 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-13 03:00:32.829969 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-13 03:00:32.829979 | orchestrator | ++ CEPH_VERSION=reef 2026-02-13 03:00:32.829990 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-13 03:00:32.830001 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-13 03:00:32.830012 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-13 03:00:32.830059 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-13 03:00:32.830070 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-13 03:00:32.830081 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-13 03:00:32.830092 | orchestrator | ++ export ARA=false 2026-02-13 03:00:32.830103 | orchestrator | ++ ARA=false 2026-02-13 03:00:32.830117 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-13 03:00:32.830128 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-13 03:00:32.830139 | orchestrator | ++ export TEMPEST=false 
2026-02-13 03:00:32.830149 | orchestrator | ++ TEMPEST=false 2026-02-13 03:00:32.830160 | orchestrator | ++ export IS_ZUUL=true 2026-02-13 03:00:32.830201 | orchestrator | ++ IS_ZUUL=true 2026-02-13 03:00:32.830213 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 03:00:32.830224 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 03:00:32.830235 | orchestrator | ++ export EXTERNAL_API=false 2026-02-13 03:00:32.830246 | orchestrator | ++ EXTERNAL_API=false 2026-02-13 03:00:32.830256 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-13 03:00:32.830267 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-13 03:00:32.830304 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-13 03:00:32.830315 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-13 03:00:32.830326 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-13 03:00:32.830337 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-13 03:00:32.830348 | orchestrator | + echo 2026-02-13 03:00:32.830359 | orchestrator | + echo '# PULL IMAGES' 2026-02-13 03:00:32.830370 | orchestrator | + echo 2026-02-13 03:00:32.830387 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-13 03:00:32.882395 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-13 03:00:32.882503 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-13 03:00:34.746837 | orchestrator | 2026-02-13 03:00:34 | INFO  | Trying to run play pull-images in environment custom 2026-02-13 03:00:44.828737 | orchestrator | 2026-02-13 03:00:44 | INFO  | Task 716a8ef8-7d3d-4eee-a14b-94ff16e46fc1 (pull-images) was prepared for execution. 2026-02-13 03:00:44.828889 | orchestrator | 2026-02-13 03:00:44 | INFO  | Task 716a8ef8-7d3d-4eee-a14b-94ff16e46fc1 is running in background. No more output. Check ARA for logs. 
2026-02-13 03:00:45.040315 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh 2026-02-13 03:00:57.024986 | orchestrator | 2026-02-13 03:00:57 | INFO  | Task 3125bcb4-79fb-4f5a-bfe8-eb1f21d20f59 (cgit) was prepared for execution. 2026-02-13 03:00:57.025119 | orchestrator | 2026-02-13 03:00:57 | INFO  | Task 3125bcb4-79fb-4f5a-bfe8-eb1f21d20f59 is running in background. No more output. Check ARA for logs. 2026-02-13 03:01:09.751421 | orchestrator | 2026-02-13 03:01:09 | INFO  | Task 90f6e856-09b2-4a43-8215-2b0bfeb50f17 (dotfiles) was prepared for execution. 2026-02-13 03:01:09.751558 | orchestrator | 2026-02-13 03:01:09 | INFO  | Task 90f6e856-09b2-4a43-8215-2b0bfeb50f17 is running in background. No more output. Check ARA for logs. 2026-02-13 03:01:22.082738 | orchestrator | 2026-02-13 03:01:22 | INFO  | Task 7dc041ed-e660-4910-9c81-4f61eee30f89 (homer) was prepared for execution. 2026-02-13 03:01:22.082841 | orchestrator | 2026-02-13 03:01:22 | INFO  | Task 7dc041ed-e660-4910-9c81-4f61eee30f89 is running in background. No more output. Check ARA for logs. 2026-02-13 03:01:34.318898 | orchestrator | 2026-02-13 03:01:34 | INFO  | Task df00835e-d0c0-4d30-b453-7962c89dfd57 (phpmyadmin) was prepared for execution. 2026-02-13 03:01:34.319013 | orchestrator | 2026-02-13 03:01:34 | INFO  | Task df00835e-d0c0-4d30-b453-7962c89dfd57 is running in background. No more output. Check ARA for logs. 2026-02-13 03:01:46.665319 | orchestrator | 2026-02-13 03:01:46 | INFO  | Task e5f605a3-b357-4038-a2be-1b10d8450360 (sosreport) was prepared for execution. 2026-02-13 03:01:46.665451 | orchestrator | 2026-02-13 03:01:46 | INFO  | Task e5f605a3-b357-4038-a2be-1b10d8450360 is running in background. No more output. Check ARA for logs. 
2026-02-13 03:01:47.033688 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh 2026-02-13 03:01:47.040330 | orchestrator | + set -e 2026-02-13 03:01:47.040390 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 03:01:47.040405 | orchestrator | ++ export INTERACTIVE=false 2026-02-13 03:01:47.040418 | orchestrator | ++ INTERACTIVE=false 2026-02-13 03:01:47.040704 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-13 03:01:47.040726 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-13 03:01:47.040737 | orchestrator | + source /opt/manager-vars.sh 2026-02-13 03:01:47.040748 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-13 03:01:47.040759 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-13 03:01:47.040770 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-13 03:01:47.040781 | orchestrator | ++ CEPH_VERSION=reef 2026-02-13 03:01:47.040792 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-13 03:01:47.040803 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-13 03:01:47.040814 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-13 03:01:47.040825 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-13 03:01:47.040836 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-13 03:01:47.040847 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-13 03:01:47.040859 | orchestrator | ++ export ARA=false 2026-02-13 03:01:47.040870 | orchestrator | ++ ARA=false 2026-02-13 03:01:47.040881 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-13 03:01:47.040920 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-13 03:01:47.040931 | orchestrator | ++ export TEMPEST=false 2026-02-13 03:01:47.040942 | orchestrator | ++ TEMPEST=false 2026-02-13 03:01:47.040953 | orchestrator | ++ export IS_ZUUL=true 2026-02-13 03:01:47.040964 | orchestrator | ++ IS_ZUUL=true 2026-02-13 03:01:47.040991 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 03:01:47.041007 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 03:01:47.041019 | orchestrator | ++ export EXTERNAL_API=false 2026-02-13 03:01:47.041030 | orchestrator | ++ EXTERNAL_API=false 2026-02-13 03:01:47.041041 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-13 03:01:47.041051 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-13 03:01:47.041063 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-13 03:01:47.041073 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-13 03:01:47.041084 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-13 03:01:47.041095 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-13 03:01:47.041332 | orchestrator | ++ semver 9.5.0 8.0.3 2026-02-13 03:01:47.088326 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-13 03:01:47.088413 | orchestrator | + osism apply frr 2026-02-13 03:01:59.744862 | orchestrator | 2026-02-13 03:01:59 | INFO  | Task b3a2508e-ac43-4ee8-a73c-769a5d368234 (frr) was prepared for execution. 2026-02-13 03:01:59.744984 | orchestrator | 2026-02-13 03:01:59 | INFO  | It takes a moment until task b3a2508e-ac43-4ee8-a73c-769a5d368234 (frr) has been started and output is visible here. 
2026-02-13 03:02:26.904328 | orchestrator | 2026-02-13 03:02:26.904483 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-13 03:02:26.904504 | orchestrator | 2026-02-13 03:02:26.904517 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-13 03:02:26.904536 | orchestrator | Friday 13 February 2026 03:02:04 +0000 (0:00:00.461) 0:00:00.461 ******* 2026-02-13 03:02:26.904548 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-13 03:02:26.904560 | orchestrator | 2026-02-13 03:02:26.904571 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-13 03:02:26.904582 | orchestrator | Friday 13 February 2026 03:02:05 +0000 (0:00:00.404) 0:00:00.866 ******* 2026-02-13 03:02:26.904593 | orchestrator | changed: [testbed-manager] 2026-02-13 03:02:26.904605 | orchestrator | 2026-02-13 03:02:26.904616 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-13 03:02:26.904630 | orchestrator | Friday 13 February 2026 03:02:06 +0000 (0:00:01.293) 0:00:02.160 ******* 2026-02-13 03:02:26.904640 | orchestrator | changed: [testbed-manager] 2026-02-13 03:02:26.904651 | orchestrator | 2026-02-13 03:02:26.904662 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-13 03:02:26.904673 | orchestrator | Friday 13 February 2026 03:02:17 +0000 (0:00:11.364) 0:00:13.524 ******* 2026-02-13 03:02:26.904683 | orchestrator | ok: [testbed-manager] 2026-02-13 03:02:26.904695 | orchestrator | 2026-02-13 03:02:26.904706 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-13 03:02:26.904716 | orchestrator | Friday 13 February 2026 03:02:18 +0000 (0:00:01.178) 0:00:14.702 ******* 2026-02-13 
03:02:26.904727 | orchestrator | changed: [testbed-manager] 2026-02-13 03:02:26.904737 | orchestrator | 2026-02-13 03:02:26.904748 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-13 03:02:26.904759 | orchestrator | Friday 13 February 2026 03:02:19 +0000 (0:00:01.085) 0:00:15.787 ******* 2026-02-13 03:02:26.904769 | orchestrator | ok: [testbed-manager] 2026-02-13 03:02:26.904780 | orchestrator | 2026-02-13 03:02:26.904791 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-13 03:02:26.904802 | orchestrator | Friday 13 February 2026 03:02:20 +0000 (0:00:00.955) 0:00:16.742 ******* 2026-02-13 03:02:26.904813 | orchestrator | skipping: [testbed-manager] 2026-02-13 03:02:26.904824 | orchestrator | 2026-02-13 03:02:26.904834 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-13 03:02:26.904845 | orchestrator | Friday 13 February 2026 03:02:21 +0000 (0:00:00.149) 0:00:16.892 ******* 2026-02-13 03:02:26.904888 | orchestrator | skipping: [testbed-manager] 2026-02-13 03:02:26.904909 | orchestrator | 2026-02-13 03:02:26.904928 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-13 03:02:26.904947 | orchestrator | Friday 13 February 2026 03:02:21 +0000 (0:00:00.126) 0:00:17.019 ******* 2026-02-13 03:02:26.904966 | orchestrator | changed: [testbed-manager] 2026-02-13 03:02:26.904984 | orchestrator | 2026-02-13 03:02:26.904997 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-13 03:02:26.905010 | orchestrator | Friday 13 February 2026 03:02:21 +0000 (0:00:00.784) 0:00:17.803 ******* 2026-02-13 03:02:26.905023 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-13 03:02:26.905035 | orchestrator | changed: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-13 03:02:26.905050 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-13 03:02:26.905062 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-13 03:02:26.905074 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-13 03:02:26.905087 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-13 03:02:26.905099 | orchestrator | 2026-02-13 03:02:26.905111 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-13 03:02:26.905124 | orchestrator | Friday 13 February 2026 03:02:23 +0000 (0:00:01.829) 0:00:19.633 ******* 2026-02-13 03:02:26.905136 | orchestrator | ok: [testbed-manager] 2026-02-13 03:02:26.905149 | orchestrator | 2026-02-13 03:02:26.905161 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-02-13 03:02:26.905173 | orchestrator | Friday 13 February 2026 03:02:25 +0000 (0:00:01.369) 0:00:21.002 ******* 2026-02-13 03:02:26.905186 | orchestrator | changed: [testbed-manager] 2026-02-13 03:02:26.905198 | orchestrator | 2026-02-13 03:02:26.905211 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 03:02:26.905223 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 03:02:26.905233 | orchestrator | 2026-02-13 03:02:26.905244 | orchestrator | 2026-02-13 03:02:26.905261 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:02:26.905272 | orchestrator | Friday 13 February 2026 03:02:26 +0000 (0:00:01.429) 0:00:22.432 ******* 2026-02-13 03:02:26.905283 | 
orchestrator | =============================================================================== 2026-02-13 03:02:26.905293 | orchestrator | osism.services.frr : Install frr package ------------------------------- 11.36s 2026-02-13 03:02:26.905304 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.83s 2026-02-13 03:02:26.905314 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.43s 2026-02-13 03:02:26.905325 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.37s 2026-02-13 03:02:26.905335 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.29s 2026-02-13 03:02:26.905380 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.18s 2026-02-13 03:02:26.905392 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.09s 2026-02-13 03:02:26.905425 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 0.96s 2026-02-13 03:02:26.905437 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 0.78s 2026-02-13 03:02:26.905448 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.40s 2026-02-13 03:02:26.905459 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s 2026-02-13 03:02:26.905470 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.13s 2026-02-13 03:02:27.247121 | orchestrator | + osism apply kubernetes 2026-02-13 03:02:29.255497 | orchestrator | 2026-02-13 03:02:29 | INFO  | Task 38e8df5b-835c-4337-b770-feff0bf09d0f (kubernetes) was prepared for execution. 
2026-02-13 03:02:29.255601 | orchestrator | 2026-02-13 03:02:29 | INFO  | It takes a moment until task 38e8df5b-835c-4337-b770-feff0bf09d0f (kubernetes) has been started and output is visible here. 2026-02-13 03:02:54.851760 | orchestrator | 2026-02-13 03:02:54.851851 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-13 03:02:54.851862 | orchestrator | 2026-02-13 03:02:54.851869 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-13 03:02:54.851877 | orchestrator | Friday 13 February 2026 03:02:34 +0000 (0:00:00.176) 0:00:00.176 ******* 2026-02-13 03:02:54.851884 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:02:54.851891 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:02:54.851897 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:02:54.851903 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:02:54.851910 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:02:54.851916 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:02:54.851922 | orchestrator | 2026-02-13 03:02:54.851929 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-13 03:02:54.851935 | orchestrator | Friday 13 February 2026 03:02:35 +0000 (0:00:00.762) 0:00:00.939 ******* 2026-02-13 03:02:54.851942 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:02:54.851949 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:02:54.851955 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:02:54.851961 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:02:54.851967 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:02:54.851974 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:02:54.851980 | orchestrator | 2026-02-13 03:02:54.851986 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-13 03:02:54.851995 | orchestrator | Friday 13 February 2026 
03:02:36 +0000 (0:00:00.541) 0:00:01.480 ******* 2026-02-13 03:02:54.852001 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:02:54.852007 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:02:54.852014 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:02:54.852020 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:02:54.852026 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:02:54.852032 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:02:54.852038 | orchestrator | 2026-02-13 03:02:54.852045 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-13 03:02:54.852051 | orchestrator | Friday 13 February 2026 03:02:36 +0000 (0:00:00.661) 0:00:02.142 ******* 2026-02-13 03:02:54.852058 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:02:54.852064 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:02:54.852070 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:02:54.852079 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:02:54.852086 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:02:54.852092 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:02:54.852098 | orchestrator | 2026-02-13 03:02:54.852104 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-13 03:02:54.852111 | orchestrator | Friday 13 February 2026 03:02:38 +0000 (0:00:01.631) 0:00:03.773 ******* 2026-02-13 03:02:54.852118 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:02:54.852124 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:02:54.852130 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:02:54.852136 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:02:54.852142 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:02:54.852149 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:02:54.852155 | orchestrator | 2026-02-13 03:02:54.852161 | orchestrator | TASK [k3s_prereq : 
Enable IPv6 router advertisements] ************************** 2026-02-13 03:02:54.852168 | orchestrator | Friday 13 February 2026 03:02:40 +0000 (0:00:02.076) 0:00:05.849 ******* 2026-02-13 03:02:54.852174 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:02:54.852195 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:02:54.852202 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:02:54.852208 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:02:54.852214 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:02:54.852220 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:02:54.852226 | orchestrator | 2026-02-13 03:02:54.852238 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-13 03:02:54.852245 | orchestrator | Friday 13 February 2026 03:02:42 +0000 (0:00:01.864) 0:00:07.713 ******* 2026-02-13 03:02:54.852251 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:02:54.852257 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:02:54.852263 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:02:54.852270 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:02:54.852276 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:02:54.852282 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:02:54.852288 | orchestrator | 2026-02-13 03:02:54.852294 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-13 03:02:54.852301 | orchestrator | Friday 13 February 2026 03:02:43 +0000 (0:00:00.675) 0:00:08.389 ******* 2026-02-13 03:02:54.852307 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:02:54.852313 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:02:54.852319 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:02:54.852325 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:02:54.852331 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:02:54.852337 | orchestrator | 
skipping: [testbed-node-2] 2026-02-13 03:02:54.852343 | orchestrator | 2026-02-13 03:02:54.852377 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-13 03:02:54.852383 | orchestrator | Friday 13 February 2026 03:02:43 +0000 (0:00:00.741) 0:00:09.130 ******* 2026-02-13 03:02:54.852390 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 03:02:54.852396 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 03:02:54.852402 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:02:54.852408 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 03:02:54.852415 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 03:02:54.852421 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:02:54.852427 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 03:02:54.852433 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 03:02:54.852439 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:02:54.852446 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 03:02:54.852464 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 03:02:54.852471 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:02:54.852478 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 03:02:54.852484 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 03:02:54.852490 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:02:54.852496 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 03:02:54.852503 | 
orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 03:02:54.852509 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:02:54.852515 | orchestrator | 2026-02-13 03:02:54.852521 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-02-13 03:02:54.852527 | orchestrator | Friday 13 February 2026 03:02:44 +0000 (0:00:00.534) 0:00:09.664 ******* 2026-02-13 03:02:54.852533 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:02:54.852540 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:02:54.852546 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:02:54.852558 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:02:54.852564 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:02:54.852570 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:02:54.852576 | orchestrator | 2026-02-13 03:02:54.852583 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-13 03:02:54.852590 | orchestrator | Friday 13 February 2026 03:02:45 +0000 (0:00:01.086) 0:00:10.751 ******* 2026-02-13 03:02:54.852596 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:02:54.852603 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:02:54.852609 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:02:54.852615 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:02:54.852621 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:02:54.852627 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:02:54.852633 | orchestrator | 2026-02-13 03:02:54.852639 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-13 03:02:54.852646 | orchestrator | Friday 13 February 2026 03:02:46 +0000 (0:00:00.770) 0:00:11.522 ******* 2026-02-13 03:02:54.852652 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:02:54.852658 | orchestrator | changed: 
[testbed-node-3] 2026-02-13 03:02:54.852665 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:02:54.852671 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:02:54.852677 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:02:54.852683 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:02:54.852689 | orchestrator | 2026-02-13 03:02:54.852695 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-13 03:02:54.852702 | orchestrator | Friday 13 February 2026 03:02:51 +0000 (0:00:05.324) 0:00:16.846 ******* 2026-02-13 03:02:54.852708 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:02:54.852718 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:02:54.852724 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:02:54.852730 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:02:54.852736 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:02:54.852742 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:02:54.852749 | orchestrator | 2026-02-13 03:02:54.852755 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-13 03:02:54.852761 | orchestrator | Friday 13 February 2026 03:02:52 +0000 (0:00:00.782) 0:00:17.628 ******* 2026-02-13 03:02:54.852767 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:02:54.852773 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:02:54.852780 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:02:54.852786 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:02:54.852792 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:02:54.852798 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:02:54.852804 | orchestrator | 2026-02-13 03:02:54.852810 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-13 03:02:54.852818 | orchestrator | Friday 13 
February 2026 03:02:53 +0000 (0:00:01.110) 0:00:18.738 *******
2026-02-13 03:02:54.852824 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:02:54.852830 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:02:54.852836 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:02:54.852842 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:02:54.852849 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:02:54.852855 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:02:54.852861 | orchestrator |
2026-02-13 03:02:54.852867 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-13 03:02:54.852873 | orchestrator | Friday 13 February 2026 03:02:54 +0000 (0:00:00.579) 0:00:19.318 *******
2026-02-13 03:02:54.852880 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-13 03:02:54.852890 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-13 03:02:54.852896 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:02:54.852902 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-13 03:02:54.852913 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-13 03:02:54.852920 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:02:54.852926 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-13 03:02:54.852932 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-13 03:02:54.852938 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:02:54.852944 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-13 03:02:54.852951 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-13 03:02:54.852957 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:02:54.852963 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-13 03:02:54.852969 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-13 03:02:54.852975 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:02:54.852981 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-13 03:02:54.852987 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-13 03:02:54.852994 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:02:54.853000 | orchestrator |
2026-02-13 03:02:54.853006 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-13 03:02:54.853016 | orchestrator | Friday 13 February 2026 03:02:54 +0000 (0:00:00.795) 0:00:20.113 *******
2026-02-13 03:04:08.391132 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:04:08.391348 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:04:08.391375 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:04:08.391393 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:08.391410 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:04:08.391425 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:08.391442 | orchestrator |
2026-02-13 03:04:08.391461 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-13 03:04:08.391479 | orchestrator | Friday 13 February 2026 03:02:55 +0000 (0:00:00.612) 0:00:20.726 *******
2026-02-13 03:04:08.391497 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:04:08.391515 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:04:08.391532 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:04:08.391550 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:04:08.391568 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:08.391584 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:08.391602 | orchestrator |
2026-02-13 03:04:08.391620 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-13 03:04:08.391639 | orchestrator |
2026-02-13 03:04:08.391660 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-13 03:04:08.391681 | orchestrator | Friday 13 February 2026 03:02:56 +0000 (0:00:01.232) 0:00:21.958 *******
2026-02-13 03:04:08.391702 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:08.391720 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:08.391738 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:08.391759 | orchestrator |
2026-02-13 03:04:08.391778 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-13 03:04:08.391798 | orchestrator | Friday 13 February 2026 03:02:58 +0000 (0:00:01.494) 0:00:23.453 *******
2026-02-13 03:04:08.391819 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:08.391839 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:08.391857 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:08.391877 | orchestrator |
2026-02-13 03:04:08.391898 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-13 03:04:08.391918 | orchestrator | Friday 13 February 2026 03:02:59 +0000 (0:00:01.587) 0:00:25.040 *******
2026-02-13 03:04:08.391938 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:08.391956 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:08.391977 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:08.391998 | orchestrator |
2026-02-13 03:04:08.392016 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-13 03:04:08.392069 | orchestrator | Friday 13 February 2026 03:03:00 +0000 (0:00:00.945) 0:00:25.986 *******
2026-02-13 03:04:08.392087 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:08.392105 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:08.392123 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:08.392138 | orchestrator |
2026-02-13 03:04:08.392155 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-13 03:04:08.392171 | orchestrator | Friday 13 February 2026 03:03:01 +0000 (0:00:00.755) 0:00:26.741 *******
2026-02-13 03:04:08.392187 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:04:08.392204 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:08.392247 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:08.392264 | orchestrator |
2026-02-13 03:04:08.392280 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-13 03:04:08.392316 | orchestrator | Friday 13 February 2026 03:03:01 +0000 (0:00:00.333) 0:00:27.074 *******
2026-02-13 03:04:08.392333 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:08.392348 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:08.392364 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:08.392377 | orchestrator |
2026-02-13 03:04:08.392393 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-13 03:04:08.392408 | orchestrator | Friday 13 February 2026 03:03:02 +0000 (0:00:00.845) 0:00:27.920 *******
2026-02-13 03:04:08.392423 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:08.392438 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:08.392453 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:08.392467 | orchestrator |
2026-02-13 03:04:08.392482 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-13 03:04:08.392498 | orchestrator | Friday 13 February 2026 03:03:03 +0000 (0:00:01.277) 0:00:29.197 *******
2026-02-13 03:04:08.392514 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:04:08.392530 | orchestrator |
2026-02-13 03:04:08.392545 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-13 03:04:08.392559 | orchestrator | Friday 13 February 2026 03:03:04 +0000 (0:00:00.489) 0:00:29.686 *******
2026-02-13 03:04:08.392574 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:08.392589 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:08.392604 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:08.392618 | orchestrator |
2026-02-13 03:04:08.392633 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-13 03:04:08.392648 | orchestrator | Friday 13 February 2026 03:03:05 +0000 (0:00:01.579) 0:00:31.265 *******
2026-02-13 03:04:08.392663 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:08.392677 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:08.392692 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:08.392706 | orchestrator |
2026-02-13 03:04:08.392722 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-13 03:04:08.392737 | orchestrator | Friday 13 February 2026 03:03:06 +0000 (0:00:00.567) 0:00:31.833 *******
2026-02-13 03:04:08.392754 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:08.392770 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:08.392786 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:08.392801 | orchestrator |
2026-02-13 03:04:08.392817 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-13 03:04:08.392833 | orchestrator | Friday 13 February 2026 03:03:07 +0000 (0:00:00.877) 0:00:32.710 *******
2026-02-13 03:04:08.392850 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:08.392867 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:08.392885 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:08.392902 | orchestrator |
2026-02-13 03:04:08.392920 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-13 03:04:08.392979 | orchestrator | Friday 13 February 2026 03:03:08 +0000 (0:00:01.209) 0:00:33.919 *******
2026-02-13 03:04:08.392999 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:04:08.393032 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:08.393048 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:08.393064 | orchestrator |
2026-02-13 03:04:08.393074 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-13 03:04:08.393084 | orchestrator | Friday 13 February 2026 03:03:09 +0000 (0:00:00.542) 0:00:34.461 *******
2026-02-13 03:04:08.393094 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:04:08.393103 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:08.393113 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:08.393122 | orchestrator |
2026-02-13 03:04:08.393132 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-13 03:04:08.393141 | orchestrator | Friday 13 February 2026 03:03:09 +0000 (0:00:00.324) 0:00:34.786 *******
2026-02-13 03:04:08.393151 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:08.393160 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:08.393169 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:08.393179 | orchestrator |
2026-02-13 03:04:08.393196 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-13 03:04:08.393206 | orchestrator | Friday 13 February 2026 03:03:10 +0000 (0:00:01.073) 0:00:35.860 *******
2026-02-13 03:04:08.393247 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:08.393260 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:08.393270 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:08.393280 | orchestrator |
2026-02-13 03:04:08.393290 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-13 03:04:08.393299 | orchestrator | Friday 13 February 2026 03:03:13 +0000 (0:00:02.623) 0:00:38.483 *******
2026-02-13 03:04:08.393309 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:08.393318 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:08.393406 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:08.393431 | orchestrator |
2026-02-13 03:04:08.393445 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-13 03:04:08.393458 | orchestrator | Friday 13 February 2026 03:03:13 +0000 (0:00:00.350) 0:00:38.833 *******
2026-02-13 03:04:08.393471 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-13 03:04:08.393486 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-13 03:04:08.393499 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-13 03:04:08.393512 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-13 03:04:08.393525 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-13 03:04:08.393539 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-13 03:04:08.393551 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-13 03:04:08.393562 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-13 03:04:08.393573 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2026-02-13 03:04:08.393586 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-13 03:04:08.393597 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-13 03:04:08.393622 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2026-02-13 03:04:08.393634 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-13 03:04:08.393646 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-13 03:04:08.393658 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2026-02-13 03:04:08.393669 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:08.393680 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:08.393691 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:08.393702 | orchestrator |
2026-02-13 03:04:08.393721 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-13 03:04:08.393730 | orchestrator | Friday 13 February 2026 03:04:07 +0000 (0:00:53.666) 0:01:32.500 *******
2026-02-13 03:04:08.393738 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:04:08.393746 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:08.393754 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:08.393761 | orchestrator |
2026-02-13 03:04:08.393769 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-13 03:04:08.393777 | orchestrator | Friday 13 February 2026 03:04:07 +0000 (0:00:00.257) 0:01:32.757 *******
2026-02-13 03:04:08.393796 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:49.376738 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:49.376848 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:49.376860 | orchestrator |
2026-02-13 03:04:49.376870 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-13 03:04:49.376879 | orchestrator | Friday 13 February 2026 03:04:08 +0000 (0:00:00.899) 0:01:33.657 *******
2026-02-13 03:04:49.376887 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:49.376894 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:49.376902 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:49.376909 | orchestrator |
2026-02-13 03:04:49.376916 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-13 03:04:49.376924 | orchestrator | Friday 13 February 2026 03:04:09 +0000 (0:00:01.092) 0:01:34.750 *******
2026-02-13 03:04:49.376931 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:49.376938 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:49.376945 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:49.376952 | orchestrator |
2026-02-13 03:04:49.376960 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-13 03:04:49.376967 | orchestrator | Friday 13 February 2026 03:04:35 +0000 (0:00:25.734) 0:02:00.484 *******
2026-02-13 03:04:49.376974 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:49.376982 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:49.376989 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:49.376996 | orchestrator |
2026-02-13 03:04:49.377004 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-13 03:04:49.377011 | orchestrator | Friday 13 February 2026 03:04:35 +0000 (0:00:00.593) 0:02:01.078 *******
2026-02-13 03:04:49.377019 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:49.377026 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:49.377033 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:49.377040 | orchestrator |
2026-02-13 03:04:49.377047 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-13 03:04:49.377054 | orchestrator | Friday 13 February 2026 03:04:36 +0000 (0:00:00.596) 0:02:01.718 *******
2026-02-13 03:04:49.377062 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:49.377112 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:49.377120 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:49.377128 | orchestrator |
2026-02-13 03:04:49.377135 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-13 03:04:49.377168 | orchestrator | Friday 13 February 2026 03:04:37 +0000 (0:00:00.596) 0:02:02.314 *******
2026-02-13 03:04:49.377187 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:49.377199 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:49.377211 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:49.377223 | orchestrator |
2026-02-13 03:04:49.377235 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-13 03:04:49.377247 | orchestrator | Friday 13 February 2026 03:04:37 +0000 (0:00:00.746) 0:02:03.061 *******
2026-02-13 03:04:49.377259 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:49.377271 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:49.377283 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:49.377296 | orchestrator |
2026-02-13 03:04:49.377309 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-13 03:04:49.377322 | orchestrator | Friday 13 February 2026 03:04:38 +0000 (0:00:00.303) 0:02:03.365 *******
2026-02-13 03:04:49.377335 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:49.377353 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:49.377366 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:49.377377 | orchestrator |
2026-02-13 03:04:49.377388 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-13 03:04:49.377400 | orchestrator | Friday 13 February 2026 03:04:38 +0000 (0:00:00.630) 0:02:03.995 *******
2026-02-13 03:04:49.377417 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:49.377432 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:49.377444 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:49.377456 | orchestrator |
2026-02-13 03:04:49.377468 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-13 03:04:49.377480 | orchestrator | Friday 13 February 2026 03:04:39 +0000 (0:00:00.609) 0:02:04.605 *******
2026-02-13 03:04:49.377492 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:49.377504 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:49.377517 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:49.377529 | orchestrator |
2026-02-13 03:04:49.377542 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-13 03:04:49.377554 | orchestrator | Friday 13 February 2026 03:04:40 +0000 (0:00:00.832) 0:02:05.437 *******
2026-02-13 03:04:49.377570 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:04:49.377580 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:04:49.377589 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:04:49.377597 | orchestrator |
2026-02-13 03:04:49.377606 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-13 03:04:49.377614 | orchestrator | Friday 13 February 2026 03:04:41 +0000 (0:00:01.017) 0:02:06.455 *******
2026-02-13 03:04:49.377623 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:04:49.377631 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:49.377638 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:49.377645 | orchestrator |
2026-02-13 03:04:49.377652 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-13 03:04:49.377660 | orchestrator | Friday 13 February 2026 03:04:41 +0000 (0:00:00.281) 0:02:06.737 *******
2026-02-13 03:04:49.377667 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:04:49.377674 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:04:49.377680 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:04:49.377687 | orchestrator |
2026-02-13 03:04:49.377695 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-13 03:04:49.377702 | orchestrator | Friday 13 February 2026 03:04:41 +0000 (0:00:00.286) 0:02:07.023 *******
2026-02-13 03:04:49.377709 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:49.377716 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:49.377723 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:49.377730 | orchestrator |
2026-02-13 03:04:49.377737 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-13 03:04:49.377744 | orchestrator | Friday 13 February 2026 03:04:42 +0000 (0:00:00.614) 0:02:07.637 *******
2026-02-13 03:04:49.377761 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:04:49.377768 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:04:49.377795 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:04:49.377815 | orchestrator |
2026-02-13 03:04:49.377829 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-13 03:04:49.377843 | orchestrator | Friday 13 February 2026 03:04:43 +0000 (0:00:00.801) 0:02:08.439 *******
2026-02-13 03:04:49.377854 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-13 03:04:49.377865 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-13 03:04:49.377877 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-13 03:04:49.377887 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-13 03:04:49.377899 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-13 03:04:49.377911 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-13 03:04:49.377921 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-13 03:04:49.377933 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-13 03:04:49.377943 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-13 03:04:49.377953 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-13 03:04:49.377964 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-13 03:04:49.377975 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-13 03:04:49.377986 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-13 03:04:49.377996 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-13 03:04:49.378006 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-13 03:04:49.378106 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-13 03:04:49.378119 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-13 03:04:49.378131 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-13 03:04:49.378141 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-13 03:04:49.378153 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-13 03:04:49.378163 | orchestrator |
2026-02-13 03:04:49.378175 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-13 03:04:49.378185 | orchestrator |
2026-02-13 03:04:49.378197 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-13 03:04:49.378210 | orchestrator | Friday 13 February 2026 03:04:46 +0000 (0:00:03.172) 0:02:11.612 *******
2026-02-13 03:04:49.378221 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:04:49.378232 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:04:49.378243 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:04:49.378255 | orchestrator |
2026-02-13 03:04:49.378286 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-13 03:04:49.378298 | orchestrator | Friday 13 February 2026 03:04:46 +0000 (0:00:00.334) 0:02:11.946 *******
2026-02-13 03:04:49.378310 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:04:49.378321 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:04:49.378331 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:04:49.378352 | orchestrator |
2026-02-13 03:04:49.378364 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-13 03:04:49.378376 | orchestrator | Friday 13 February 2026 03:04:47 +0000 (0:00:00.853) 0:02:12.800 *******
2026-02-13 03:04:49.378388 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:04:49.378399 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:04:49.378411 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:04:49.378422 | orchestrator |
2026-02-13 03:04:49.378433 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-13 03:04:49.378445 | orchestrator | Friday 13 February 2026 03:04:47 +0000 (0:00:00.326) 0:02:13.126 *******
2026-02-13 03:04:49.378457 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:04:49.378470 | orchestrator |
2026-02-13 03:04:49.378481 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-13 03:04:49.378491 | orchestrator | Friday 13 February 2026 03:04:48 +0000 (0:00:00.473) 0:02:13.599 *******
2026-02-13 03:04:49.378503 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:04:49.378514 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:04:49.378525 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:04:49.378537 | orchestrator |
2026-02-13 03:04:49.378549 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-13 03:04:49.378561 | orchestrator | Friday 13 February 2026 03:04:48 +0000 (0:00:00.514) 0:02:14.114 *******
2026-02-13 03:04:49.378573 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:04:49.378584 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:04:49.378596 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:04:49.378607 | orchestrator |
2026-02-13 03:04:49.378619 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-13 03:04:49.378631 | orchestrator | Friday 13 February 2026 03:04:49 +0000 (0:00:00.343) 0:02:14.458 *******
2026-02-13 03:04:49.378659 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:06:25.781136 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:06:25.781238 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:06:25.781247 | orchestrator |
2026-02-13 03:06:25.781254 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-13 03:06:25.781262 | orchestrator | Friday 13 February 2026 03:04:49 +0000 (0:00:00.327) 0:02:14.785 *******
2026-02-13 03:06:25.781268 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:06:25.781274 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:06:25.781280 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:06:25.781286 | orchestrator |
2026-02-13 03:06:25.781292 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-13 03:06:25.781298 | orchestrator | Friday 13 February 2026 03:04:50 +0000 (0:00:00.618) 0:02:15.404 *******
2026-02-13 03:06:25.781304 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:06:25.781309 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:06:25.781315 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:06:25.781321 | orchestrator |
2026-02-13 03:06:25.781326 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-13 03:06:25.781332 | orchestrator | Friday 13 February 2026 03:04:51 +0000 (0:00:01.359) 0:02:16.764 *******
2026-02-13 03:06:25.781338 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:06:25.781344 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:06:25.781350 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:06:25.781355 | orchestrator |
2026-02-13 03:06:25.781361 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-13 03:06:25.781367 | orchestrator | Friday 13 February 2026 03:04:52 +0000 (0:00:01.256) 0:02:18.021 *******
2026-02-13 03:06:25.781372 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:06:25.781378 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:06:25.781384 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:06:25.781390 | orchestrator |
2026-02-13 03:06:25.781396 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-13 03:06:25.781423 | orchestrator |
2026-02-13 03:06:25.781429 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-13 03:06:25.781434 | orchestrator | Friday 13 February 2026 03:05:03 +0000 (0:00:10.497) 0:02:28.518 *******
2026-02-13 03:06:25.781440 | orchestrator | ok: [testbed-manager]
2026-02-13 03:06:25.781447 | orchestrator |
2026-02-13 03:06:25.781452 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-13 03:06:25.781458 | orchestrator | Friday 13 February 2026 03:05:04 +0000 (0:00:00.777) 0:02:29.296 *******
2026-02-13 03:06:25.781464 | orchestrator | changed: [testbed-manager]
2026-02-13 03:06:25.781470 | orchestrator |
2026-02-13 03:06:25.781476 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-13 03:06:25.781482 | orchestrator | Friday 13 February 2026 03:05:04 +0000 (0:00:00.713) 0:02:30.009 *******
2026-02-13 03:06:25.781487 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-13 03:06:25.781493 | orchestrator |
2026-02-13 03:06:25.781499 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-13 03:06:25.781505 | orchestrator | Friday 13 February 2026 03:05:05 +0000 (0:00:00.563) 0:02:30.573 *******
2026-02-13 03:06:25.781510 | orchestrator | changed: [testbed-manager]
2026-02-13 03:06:25.781516 | orchestrator |
2026-02-13 03:06:25.781522 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-13 03:06:25.781528 | orchestrator | Friday 13 February 2026 03:05:06 +0000 (0:00:00.918) 0:02:31.491 *******
2026-02-13 03:06:25.781533 | orchestrator | changed: [testbed-manager]
2026-02-13 03:06:25.781539 | orchestrator |
2026-02-13 03:06:25.781544 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-13 03:06:25.781550 | orchestrator | Friday 13 February 2026 03:05:06 +0000 (0:00:00.599) 0:02:32.091 *******
2026-02-13 03:06:25.781557 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-13 03:06:25.781562 | orchestrator |
2026-02-13 03:06:25.781568 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-13 03:06:25.781574 | orchestrator | Friday 13 February 2026 03:05:08 +0000 (0:00:01.573) 0:02:33.665 *******
2026-02-13 03:06:25.781579 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-13 03:06:25.781585 | orchestrator |
2026-02-13 03:06:25.781609 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-13 03:06:25.781615 | orchestrator | Friday 13 February 2026 03:05:09 +0000 (0:00:00.849) 0:02:34.514 *******
2026-02-13 03:06:25.781621 | orchestrator | changed: [testbed-manager]
2026-02-13 03:06:25.781626 | orchestrator |
2026-02-13 03:06:25.781632 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-13 03:06:25.781637 | orchestrator | Friday 13 February 2026 03:05:09 +0000 (0:00:00.447) 0:02:34.961 *******
2026-02-13 03:06:25.781643 | orchestrator | changed: [testbed-manager]
2026-02-13 03:06:25.781648 | orchestrator |
2026-02-13 03:06:25.781654 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-13 03:06:25.781659 | orchestrator |
2026-02-13 03:06:25.781665 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-13 03:06:25.781672 | orchestrator | Friday 13 February 2026 03:05:10 +0000 (0:00:00.431) 0:02:35.393 *******
2026-02-13 03:06:25.781678 | orchestrator | ok: [testbed-manager]
2026-02-13 03:06:25.781683 | orchestrator |
2026-02-13 03:06:25.781689 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-13 03:06:25.781694 | orchestrator | Friday 13 February 2026 03:05:10 +0000 (0:00:00.342) 0:02:35.735 *******
2026-02-13 03:06:25.781700 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-13 03:06:25.781707 | orchestrator |
2026-02-13 03:06:25.781713 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-13 03:06:25.781719 | orchestrator | Friday 13 February 2026 03:05:10 +0000 (0:00:00.251) 0:02:35.987 *******
2026-02-13 03:06:25.781725 | orchestrator | ok: [testbed-manager]
2026-02-13 03:06:25.781730 | orchestrator |
2026-02-13 03:06:25.781741 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-13 03:06:25.781746 | orchestrator | Friday 13 February 2026 03:05:11 +0000 (0:00:00.842) 0:02:36.829 *******
2026-02-13 03:06:25.781752 | orchestrator | ok: [testbed-manager]
2026-02-13 03:06:25.781758 | orchestrator |
2026-02-13 03:06:25.781776 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-13 03:06:25.781782 | orchestrator | Friday 13 February 2026 03:05:13 +0000 (0:00:01.699) 0:02:38.529 *******
2026-02-13 03:06:25.781788 | orchestrator | changed: [testbed-manager]
2026-02-13 03:06:25.781794 | orchestrator |
2026-02-13 03:06:25.781800 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-13 03:06:25.781805 | orchestrator | Friday 13 February 2026 03:05:14 +0000 (0:00:00.909) 0:02:39.438 *******
2026-02-13 03:06:25.781811 | orchestrator | ok: [testbed-manager]
2026-02-13 03:06:25.781816 | orchestrator |
2026-02-13 03:06:25.781822 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-13 03:06:25.781828 | orchestrator | Friday 13 February 2026 03:05:14 +0000 (0:00:00.466) 0:02:39.905 *******
2026-02-13 03:06:25.781834 | orchestrator | changed: [testbed-manager]
2026-02-13 03:06:25.781839 | orchestrator |
2026-02-13 03:06:25.781845 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-13 03:06:25.781850 | orchestrator | Friday 13 February 2026 03:05:21 +0000 (0:00:07.272) 0:02:47.177 *******
2026-02-13 03:06:25.781856 | orchestrator | changed: [testbed-manager]
2026-02-13 03:06:25.781862 | orchestrator |
2026-02-13 03:06:25.781867 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-13 03:06:25.781873 | orchestrator | Friday 13 February 2026 03:05:33 +0000 (0:00:11.690) 0:02:58.868 *******
2026-02-13 03:06:25.781879 | orchestrator | ok: [testbed-manager]
2026-02-13 03:06:25.781885 | orchestrator |
2026-02-13 03:06:25.781890 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-13 03:06:25.781896 | orchestrator |
2026-02-13 03:06:25.781902 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-13 03:06:25.781907 | orchestrator | Friday 13 February 2026 03:05:34 +0000 (0:00:00.707) 0:02:59.576 *******
2026-02-13 03:06:25.781913 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:06:25.781953 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:06:25.781959 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:06:25.781965 | orchestrator |
2026-02-13 03:06:25.781971 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-13 03:06:25.781976 | orchestrator | Friday 13 February 2026 03:05:34 +0000 (0:00:00.290) 0:02:59.866 *******
2026-02-13 03:06:25.781982 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:06:25.781988 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:06:25.781993 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:06:25.781998 | orchestrator |
2026-02-13 03:06:25.782004 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-13 03:06:25.782009 | orchestrator | Friday 13 February 2026 03:05:34 +0000 (0:00:00.292) 0:03:00.159 *******
2026-02-13 03:06:25.782050 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:06:25.782055 | orchestrator |
2026-02-13 03:06:25.782061 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-13 03:06:25.782066 | orchestrator | Friday 13 February 2026 03:05:35 +0000 (0:00:00.648) 0:03:00.807 *******
2026-02-13 03:06:25.782073 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-13 03:06:25.782079 |
orchestrator | 2026-02-13 03:06:25.782085 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-13 03:06:25.782090 | orchestrator | Friday 13 February 2026 03:05:36 +0000 (0:00:00.802) 0:03:01.610 ******* 2026-02-13 03:06:25.782096 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 03:06:25.782101 | orchestrator | 2026-02-13 03:06:25.782107 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-13 03:06:25.782117 | orchestrator | Friday 13 February 2026 03:05:37 +0000 (0:00:00.827) 0:03:02.437 ******* 2026-02-13 03:06:25.782123 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:06:25.782129 | orchestrator | 2026-02-13 03:06:25.782134 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-13 03:06:25.782140 | orchestrator | Friday 13 February 2026 03:05:37 +0000 (0:00:00.102) 0:03:02.540 ******* 2026-02-13 03:06:25.782145 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 03:06:25.782151 | orchestrator | 2026-02-13 03:06:25.782157 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-13 03:06:25.782162 | orchestrator | Friday 13 February 2026 03:05:38 +0000 (0:00:00.927) 0:03:03.467 ******* 2026-02-13 03:06:25.782168 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:06:25.782174 | orchestrator | 2026-02-13 03:06:25.782179 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-13 03:06:25.782185 | orchestrator | Friday 13 February 2026 03:05:38 +0000 (0:00:00.121) 0:03:03.588 ******* 2026-02-13 03:06:25.782190 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:06:25.782196 | orchestrator | 2026-02-13 03:06:25.782202 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-13 03:06:25.782207 | orchestrator | Friday 13 
February 2026 03:05:38 +0000 (0:00:00.120) 0:03:03.709 ******* 2026-02-13 03:06:25.782213 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:06:25.782218 | orchestrator | 2026-02-13 03:06:25.782224 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-13 03:06:25.782234 | orchestrator | Friday 13 February 2026 03:05:38 +0000 (0:00:00.136) 0:03:03.845 ******* 2026-02-13 03:06:25.782240 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:06:25.782245 | orchestrator | 2026-02-13 03:06:25.782251 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-13 03:06:25.782256 | orchestrator | Friday 13 February 2026 03:05:38 +0000 (0:00:00.119) 0:03:03.965 ******* 2026-02-13 03:06:25.782261 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-13 03:06:25.782267 | orchestrator | 2026-02-13 03:06:25.782272 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-13 03:06:25.782277 | orchestrator | Friday 13 February 2026 03:05:43 +0000 (0:00:05.031) 0:03:08.996 ******* 2026-02-13 03:06:25.782283 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-13 03:06:25.782288 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-13 03:06:25.782301 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-13 03:06:48.198205 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-13 03:06:48.198326 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-13 03:06:48.198341 | orchestrator | 2026-02-13 03:06:48.198355 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-13 03:06:48.198367 | orchestrator | Friday 13 February 2026 03:06:25 +0000 (0:00:42.046) 0:03:51.043 ******* 2026-02-13 03:06:48.198379 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 03:06:48.198391 | orchestrator | 2026-02-13 03:06:48.198402 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-13 03:06:48.198413 | orchestrator | Friday 13 February 2026 03:06:26 +0000 (0:00:01.161) 0:03:52.204 ******* 2026-02-13 03:06:48.198425 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-13 03:06:48.198453 | orchestrator | 2026-02-13 03:06:48.198465 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-13 03:06:48.198476 | orchestrator | Friday 13 February 2026 03:06:28 +0000 (0:00:01.505) 0:03:53.710 ******* 2026-02-13 03:06:48.198487 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-13 03:06:48.198498 | orchestrator | 2026-02-13 03:06:48.198509 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-13 03:06:48.198520 | orchestrator | Friday 13 February 2026 03:06:29 +0000 (0:00:01.210) 0:03:54.920 ******* 2026-02-13 03:06:48.198557 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:06:48.198569 | orchestrator | 2026-02-13 03:06:48.198580 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-13 03:06:48.198591 | orchestrator 
| Friday 13 February 2026 03:06:29 +0000 (0:00:00.128) 0:03:55.048 ******* 2026-02-13 03:06:48.198601 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-13 03:06:48.198613 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-13 03:06:48.198624 | orchestrator | 2026-02-13 03:06:48.198635 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-13 03:06:48.198649 | orchestrator | Friday 13 February 2026 03:06:31 +0000 (0:00:01.825) 0:03:56.874 ******* 2026-02-13 03:06:48.198687 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:06:48.198712 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:06:48.198725 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:06:48.198738 | orchestrator | 2026-02-13 03:06:48.198751 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-13 03:06:48.198763 | orchestrator | Friday 13 February 2026 03:06:31 +0000 (0:00:00.297) 0:03:57.172 ******* 2026-02-13 03:06:48.198775 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:06:48.198788 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:06:48.198801 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:06:48.198813 | orchestrator | 2026-02-13 03:06:48.198826 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-13 03:06:48.198839 | orchestrator | 2026-02-13 03:06:48.198883 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-13 03:06:48.198948 | orchestrator | Friday 13 February 2026 03:06:32 +0000 (0:00:00.927) 0:03:58.099 ******* 2026-02-13 03:06:48.198961 | orchestrator | ok: [testbed-manager] 2026-02-13 03:06:48.198973 | orchestrator | 2026-02-13 03:06:48.198986 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-13 03:06:48.198999 | orchestrator | Friday 13 February 2026 03:06:33 +0000 (0:00:00.409) 0:03:58.509 ******* 2026-02-13 03:06:48.199010 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-13 03:06:48.199039 | orchestrator | 2026-02-13 03:06:48.199051 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-13 03:06:48.199061 | orchestrator | Friday 13 February 2026 03:06:33 +0000 (0:00:00.239) 0:03:58.749 ******* 2026-02-13 03:06:48.199072 | orchestrator | changed: [testbed-manager] 2026-02-13 03:06:48.199083 | orchestrator | 2026-02-13 03:06:48.199094 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-13 03:06:48.199105 | orchestrator | 2026-02-13 03:06:48.199116 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-13 03:06:48.199127 | orchestrator | Friday 13 February 2026 03:06:38 +0000 (0:00:05.296) 0:04:04.045 ******* 2026-02-13 03:06:48.199138 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:06:48.199149 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:06:48.199159 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:06:48.199170 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:06:48.199180 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:06:48.199215 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:06:48.199226 | orchestrator | 2026-02-13 03:06:48.199237 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-13 03:06:48.199248 | orchestrator | Friday 13 February 2026 03:06:39 +0000 (0:00:00.574) 0:04:04.619 ******* 2026-02-13 03:06:48.199259 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-13 03:06:48.199270 | orchestrator | ok: [testbed-node-4 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-13 03:06:48.199281 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-13 03:06:48.199292 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-13 03:06:48.199311 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-13 03:06:48.199322 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-13 03:06:48.199333 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-13 03:06:48.199344 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-13 03:06:48.199355 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-13 03:06:48.199409 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-13 03:06:48.199422 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-13 03:06:48.199433 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-13 03:06:48.199444 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-13 03:06:48.199467 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-13 03:06:48.199478 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-13 03:06:48.199509 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-13 03:06:48.199520 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-13 03:06:48.199531 | orchestrator | ok: [testbed-node-2 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-13 03:06:48.199542 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-13 03:06:48.199552 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-13 03:06:48.199580 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-13 03:06:48.199591 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-13 03:06:48.199601 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-13 03:06:48.199612 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-13 03:06:48.199623 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-13 03:06:48.199634 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-13 03:06:48.199644 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-13 03:06:48.199655 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-13 03:06:48.199666 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-13 03:06:48.199695 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-13 03:06:48.199706 | orchestrator | 2026-02-13 03:06:48.199717 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-13 03:06:48.199728 | orchestrator | Friday 13 February 2026 03:06:47 +0000 (0:00:07.737) 0:04:12.357 ******* 2026-02-13 03:06:48.199739 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:06:48.199750 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:06:48.199760 | orchestrator | 
skipping: [testbed-node-5]
2026-02-13 03:06:48.199791 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:06:48.199803 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:06:48.199813 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:06:48.199824 | orchestrator |
2026-02-13 03:06:48.199835 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-13 03:06:48.199846 | orchestrator | Friday 13 February 2026 03:06:47 +0000 (0:00:00.482) 0:04:12.839 *******
2026-02-13 03:06:48.199857 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:06:48.199874 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:06:48.199917 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:06:48.199943 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:06:48.199954 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:06:48.199964 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:06:48.199975 | orchestrator |
2026-02-13 03:06:48.199986 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:06:48.200013 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:06:48.200027 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-02-13 03:06:48.200038 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-13 03:06:48.200049 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-13 03:06:48.200060 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-13 03:06:48.200071 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-13 03:06:48.200081 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-13 03:06:48.200092 | orchestrator |
2026-02-13 03:06:48.200123 | orchestrator |
2026-02-13 03:06:48.200134 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:06:48.200145 | orchestrator | Friday 13 February 2026 03:06:48 +0000 (0:00:00.608) 0:04:13.448 *******
2026-02-13 03:06:48.200164 | orchestrator | ===============================================================================
2026-02-13 03:06:48.544671 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 53.67s
2026-02-13 03:06:48.544794 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.05s
2026-02-13 03:06:48.544811 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.73s
2026-02-13 03:06:48.544823 | orchestrator | kubectl : Install required packages ------------------------------------ 11.69s
2026-02-13 03:06:48.544834 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.50s
2026-02-13 03:06:48.544845 | orchestrator | Manage labels ----------------------------------------------------------- 7.74s
2026-02-13 03:06:48.544856 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.27s
2026-02-13 03:06:48.544867 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.32s
2026-02-13 03:06:48.544877 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.30s
2026-02-13 03:06:48.544921 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.03s
2026-02-13 03:06:48.544935 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.17s
2026-02-13 03:06:48.544947 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.62s
2026-02-13 03:06:48.544959 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.08s
2026-02-13 03:06:48.544969 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 1.86s
2026-02-13 03:06:48.544980 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.83s
2026-02-13 03:06:48.544991 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.70s
2026-02-13 03:06:48.545001 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.63s
2026-02-13 03:06:48.545039 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.59s
2026-02-13 03:06:48.545051 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 1.58s
2026-02-13 03:06:48.545062 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.57s
2026-02-13 03:06:48.805879 | orchestrator | + osism apply copy-kubeconfig
2026-02-13 03:07:00.865164 | orchestrator | 2026-02-13 03:07:00 | INFO  | Task e295d0f8-cc5e-42f6-b304-ac52d36f8ad1 (copy-kubeconfig) was prepared for execution.
2026-02-13 03:07:00.865284 | orchestrator | 2026-02-13 03:07:00 | INFO  | It takes a moment until task e295d0f8-cc5e-42f6-b304-ac52d36f8ad1 (copy-kubeconfig) has been started and output is visible here.
2026-02-13 03:07:07.003308 | orchestrator |
2026-02-13 03:07:07.003425 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2026-02-13 03:07:07.003443 | orchestrator |
2026-02-13 03:07:07.003456 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-13 03:07:07.003468 | orchestrator | Friday 13 February 2026 03:07:04 +0000 (0:00:00.113) 0:00:00.113 *******
2026-02-13 03:07:07.003479 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-13 03:07:07.003490 | orchestrator |
2026-02-13 03:07:07.003502 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-13 03:07:07.003513 | orchestrator | Friday 13 February 2026 03:07:05 +0000 (0:00:00.692) 0:00:00.806 *******
2026-02-13 03:07:07.003545 | orchestrator | changed: [testbed-manager]
2026-02-13 03:07:07.003558 | orchestrator |
2026-02-13 03:07:07.003569 | orchestrator | TASK [Change server address in the kubeconfig file] ****************************
2026-02-13 03:07:07.003580 | orchestrator | Friday 13 February 2026 03:07:06 +0000 (0:00:01.071) 0:00:01.878 *******
2026-02-13 03:07:07.003597 | orchestrator | changed: [testbed-manager]
2026-02-13 03:07:07.003608 | orchestrator |
2026-02-13 03:07:07.003622 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:07:07.003634 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:07:07.003646 | orchestrator |
2026-02-13 03:07:07.003657 | orchestrator |
2026-02-13 03:07:07.003668 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:07:07.003679 | orchestrator | Friday 13 February 2026 03:07:06 +0000 (0:00:00.403) 0:00:02.281 *******
2026-02-13 03:07:07.003689 | orchestrator | ===============================================================================
2026-02-13 03:07:07.003700 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.07s
2026-02-13 03:07:07.003711 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s
2026-02-13 03:07:07.003722 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.40s
2026-02-13 03:07:07.185508 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh
2026-02-13 03:07:19.163070 | orchestrator | 2026-02-13 03:07:19 | INFO  | Task e74cbee8-077e-42fa-a800-6be7e3324b72 (openstackclient) was prepared for execution.
2026-02-13 03:07:19.163193 | orchestrator | 2026-02-13 03:07:19 | INFO  | It takes a moment until task e74cbee8-077e-42fa-a800-6be7e3324b72 (openstackclient) has been started and output is visible here.
2026-02-13 03:08:05.883288 | orchestrator |
2026-02-13 03:08:05.883400 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-13 03:08:05.883415 | orchestrator |
2026-02-13 03:08:05.883426 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-13 03:08:05.883437 | orchestrator | Friday 13 February 2026 03:07:23 +0000 (0:00:00.226) 0:00:00.226 *******
2026-02-13 03:08:05.883448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-13 03:08:05.883459 | orchestrator |
2026-02-13 03:08:05.883495 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-13 03:08:05.883505 | orchestrator | Friday 13 February 2026 03:07:23 +0000 (0:00:00.216) 0:00:00.442 *******
2026-02-13 03:08:05.883515 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-13
03:08:05.883526 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-13 03:08:05.883536 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-13 03:08:05.883546 | orchestrator | 2026-02-13 03:08:05.883555 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-13 03:08:05.883565 | orchestrator | Friday 13 February 2026 03:07:24 +0000 (0:00:01.238) 0:00:01.681 ******* 2026-02-13 03:08:05.883575 | orchestrator | changed: [testbed-manager] 2026-02-13 03:08:05.883585 | orchestrator | 2026-02-13 03:08:05.883594 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-13 03:08:05.883604 | orchestrator | Friday 13 February 2026 03:07:26 +0000 (0:00:01.477) 0:00:03.158 ******* 2026-02-13 03:08:05.883614 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-13 03:08:05.883625 | orchestrator | ok: [testbed-manager] 2026-02-13 03:08:05.883635 | orchestrator | 2026-02-13 03:08:05.883645 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-13 03:08:05.883654 | orchestrator | Friday 13 February 2026 03:08:00 +0000 (0:00:34.393) 0:00:37.551 ******* 2026-02-13 03:08:05.883664 | orchestrator | changed: [testbed-manager] 2026-02-13 03:08:05.883673 | orchestrator | 2026-02-13 03:08:05.883683 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-13 03:08:05.883692 | orchestrator | Friday 13 February 2026 03:08:01 +0000 (0:00:00.961) 0:00:38.513 ******* 2026-02-13 03:08:05.883702 | orchestrator | ok: [testbed-manager] 2026-02-13 03:08:05.883711 | orchestrator | 2026-02-13 03:08:05.883721 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-13 03:08:05.883731 | orchestrator | Friday 13 February 2026 03:08:02 
+0000 (0:00:00.661) 0:00:39.175 *******
2026-02-13 03:08:05.883740 | orchestrator | changed: [testbed-manager]
2026-02-13 03:08:05.883750 | orchestrator |
2026-02-13 03:08:05.883760 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-13 03:08:05.883770 | orchestrator | Friday 13 February 2026 03:08:03 +0000 (0:00:01.485) 0:00:40.664 *******
2026-02-13 03:08:05.883779 | orchestrator | changed: [testbed-manager]
2026-02-13 03:08:05.883789 | orchestrator |
2026-02-13 03:08:05.883799 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-13 03:08:05.883845 | orchestrator | Friday 13 February 2026 03:08:04 +0000 (0:00:00.718) 0:00:41.382 *******
2026-02-13 03:08:05.883858 | orchestrator | changed: [testbed-manager]
2026-02-13 03:08:05.883869 | orchestrator |
2026-02-13 03:08:05.883880 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-13 03:08:05.883892 | orchestrator | Friday 13 February 2026 03:08:05 +0000 (0:00:00.565) 0:00:41.947 *******
2026-02-13 03:08:05.883903 | orchestrator | ok: [testbed-manager]
2026-02-13 03:08:05.883913 | orchestrator |
2026-02-13 03:08:05.883925 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:08:05.883937 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:08:05.883949 | orchestrator |
2026-02-13 03:08:05.883961 | orchestrator |
2026-02-13 03:08:05.883972 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:08:05.883984 | orchestrator | Friday 13 February 2026 03:08:05 +0000 (0:00:00.387) 0:00:42.334 *******
2026-02-13 03:08:05.883996 | orchestrator | ===============================================================================
2026-02-13 03:08:05.884007 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.39s
2026-02-13 03:08:05.884020 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.49s
2026-02-13 03:08:05.884039 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.48s
2026-02-13 03:08:05.884050 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.24s
2026-02-13 03:08:05.884062 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.96s
2026-02-13 03:08:05.884073 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.72s
2026-02-13 03:08:05.884085 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.66s
2026-02-13 03:08:05.884096 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.57s
2026-02-13 03:08:05.884106 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.39s
2026-02-13 03:08:05.884118 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.22s
2026-02-13 03:08:08.188978 | orchestrator | 2026-02-13 03:08:08 | INFO  | Task 445a0cf7-7d37-40bf-aac9-da65de27038f (common) was prepared for execution.
2026-02-13 03:08:08.189057 | orchestrator | 2026-02-13 03:08:08 | INFO  | It takes a moment until task 445a0cf7-7d37-40bf-aac9-da65de27038f (common) has been started and output is visible here.
2026-02-13 03:08:19.940305 | orchestrator | 2026-02-13 03:08:19.940448 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-13 03:08:19.940476 | orchestrator | 2026-02-13 03:08:19.940498 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-13 03:08:19.940516 | orchestrator | Friday 13 February 2026 03:08:12 +0000 (0:00:00.273) 0:00:00.273 ******* 2026-02-13 03:08:19.940535 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:08:19.940558 | orchestrator | 2026-02-13 03:08:19.940577 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-13 03:08:19.940598 | orchestrator | Friday 13 February 2026 03:08:13 +0000 (0:00:01.160) 0:00:01.433 ******* 2026-02-13 03:08:19.940615 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-13 03:08:19.940627 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-13 03:08:19.940639 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-13 03:08:19.940650 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-13 03:08:19.940661 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-13 03:08:19.940672 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-13 03:08:19.940682 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-13 03:08:19.940693 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-13 03:08:19.940704 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 
2026-02-13 03:08:19.940736 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-13 03:08:19.940749 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-13 03:08:19.940759 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-13 03:08:19.940770 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-13 03:08:19.940781 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-13 03:08:19.940792 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-13 03:08:19.940873 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-13 03:08:19.940887 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-13 03:08:19.940929 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-13 03:08:19.940943 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-13 03:08:19.940956 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-13 03:08:19.940968 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-13 03:08:19.940982 | orchestrator | 2026-02-13 03:08:19.940995 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-13 03:08:19.941005 | orchestrator | Friday 13 February 2026 03:08:15 +0000 (0:00:02.488) 0:00:03.922 ******* 2026-02-13 03:08:19.941017 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:08:19.941029 | orchestrator | 2026-02-13 03:08:19.941040 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-13 03:08:19.941056 | orchestrator | Friday 13 February 2026 03:08:17 +0000 (0:00:01.263) 0:00:05.186 ******* 2026-02-13 03:08:19.941071 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:19.941086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:19.941125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:19.941138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:19.941150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:19.941161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:19.941180 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:19.941192 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:19.941203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:19.941223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132245 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132302 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132325 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132348 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 
03:08:21.132413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132532 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:21.132552 | orchestrator | 2026-02-13 03:08:21.132573 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-13 03:08:21.132592 | orchestrator | Friday 13 February 2026 03:08:20 +0000 (0:00:03.592) 0:00:08.778 ******* 2026-02-13 03:08:21.132615 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:21.132635 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.132653 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.132672 | orchestrator | skipping: [testbed-manager] 2026-02-13 03:08:21.132693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:21.132734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.681955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.682127 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:08:21.682184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:21.682196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.682205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.682213 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:08:21.682222 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:21.682240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.682249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.682257 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:08:21.682283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:21.682298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.682307 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.682315 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:08:21.682324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:21.682332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.682340 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:21.682348 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:08:21.682357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:21.682371 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:22.500381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:22.500486 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:08:22.500502 | orchestrator | 2026-02-13 03:08:22.500525 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-13 03:08:22.500545 | orchestrator | Friday 13 February 2026 03:08:21 +0000 (0:00:00.850) 0:00:09.628 ******* 2026-02-13 03:08:22.500566 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:22.500590 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:22.500607 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:22.500619 | orchestrator | skipping: [testbed-manager] 2026-02-13 03:08:22.500650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:22.500668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:22.500705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:22.500717 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:08:22.500755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:22.500768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:22.500780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:22.500791 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:08:22.500879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:22.500891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-13 03:08:22.500909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:22.500930 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:08:22.500942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:22.500973 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:27.645011 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:27.645125 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:08:27.645145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:27.645160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:27.645173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:27.645185 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:08:27.645197 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 03:08:27.645237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:27.645250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:27.645262 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:08:27.645274 | orchestrator | 2026-02-13 
03:08:27.645287 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-13 03:08:27.645299 | orchestrator | Friday 13 February 2026 03:08:23 +0000 (0:00:01.711) 0:00:11.340 ******* 2026-02-13 03:08:27.645311 | orchestrator | skipping: [testbed-manager] 2026-02-13 03:08:27.645322 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:08:27.645334 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:08:27.645345 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:08:27.645373 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:08:27.645385 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:08:27.645396 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:08:27.645407 | orchestrator | 2026-02-13 03:08:27.645419 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-13 03:08:27.645430 | orchestrator | Friday 13 February 2026 03:08:24 +0000 (0:00:00.706) 0:00:12.046 ******* 2026-02-13 03:08:27.645441 | orchestrator | skipping: [testbed-manager] 2026-02-13 03:08:27.645452 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:08:27.645463 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:08:27.645474 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:08:27.645486 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:08:27.645497 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:08:27.645508 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:08:27.645519 | orchestrator | 2026-02-13 03:08:27.645531 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-13 03:08:27.645542 | orchestrator | Friday 13 February 2026 03:08:25 +0000 (0:00:00.936) 0:00:12.983 ******* 2026-02-13 03:08:27.645554 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:27.645584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:27.645605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:27.645621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:27.645632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:27.645644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:27.645668 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:30.396321 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396517 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396528 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396569 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396583 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396662 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:08:30.396674 | orchestrator | 2026-02-13 03:08:30.396687 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-13 03:08:30.396699 | orchestrator | Friday 13 February 2026 03:08:28 +0000 
(0:00:03.495) 0:00:16.479 ******* 2026-02-13 03:08:30.396710 | orchestrator | [WARNING]: Skipped 2026-02-13 03:08:30.396723 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-13 03:08:30.396736 | orchestrator | to this access issue: 2026-02-13 03:08:30.396747 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-13 03:08:30.396758 | orchestrator | directory 2026-02-13 03:08:30.396769 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-13 03:08:30.396781 | orchestrator | 2026-02-13 03:08:30.396828 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-13 03:08:30.396840 | orchestrator | Friday 13 February 2026 03:08:29 +0000 (0:00:00.957) 0:00:17.436 ******* 2026-02-13 03:08:30.396853 | orchestrator | [WARNING]: Skipped 2026-02-13 03:08:30.396874 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-13 03:08:40.216042 | orchestrator | to this access issue: 2026-02-13 03:08:40.216141 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-13 03:08:40.216153 | orchestrator | directory 2026-02-13 03:08:40.216163 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-13 03:08:40.216172 | orchestrator | 2026-02-13 03:08:40.216180 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-13 03:08:40.216189 | orchestrator | Friday 13 February 2026 03:08:30 +0000 (0:00:01.207) 0:00:18.643 ******* 2026-02-13 03:08:40.216217 | orchestrator | [WARNING]: Skipped 2026-02-13 03:08:40.216225 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-13 03:08:40.216232 | orchestrator | to this access issue: 2026-02-13 03:08:40.216240 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 
2026-02-13 03:08:40.216247 | orchestrator | directory 2026-02-13 03:08:40.216254 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-13 03:08:40.216262 | orchestrator | 2026-02-13 03:08:40.216269 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-13 03:08:40.216277 | orchestrator | Friday 13 February 2026 03:08:31 +0000 (0:00:00.864) 0:00:19.508 ******* 2026-02-13 03:08:40.216284 | orchestrator | [WARNING]: Skipped 2026-02-13 03:08:40.216291 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-13 03:08:40.216298 | orchestrator | to this access issue: 2026-02-13 03:08:40.216306 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-13 03:08:40.216313 | orchestrator | directory 2026-02-13 03:08:40.216320 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-13 03:08:40.216327 | orchestrator | 2026-02-13 03:08:40.216335 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-13 03:08:40.216342 | orchestrator | Friday 13 February 2026 03:08:32 +0000 (0:00:00.838) 0:00:20.347 ******* 2026-02-13 03:08:40.216349 | orchestrator | changed: [testbed-manager] 2026-02-13 03:08:40.216357 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:08:40.216364 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:08:40.216371 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:08:40.216378 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:08:40.216385 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:08:40.216422 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:08:40.216431 | orchestrator | 2026-02-13 03:08:40.216438 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-13 03:08:40.216446 | orchestrator | Friday 13 February 2026 03:08:34 +0000 (0:00:02.571) 0:00:22.918 ******* 
2026-02-13 03:08:40.216453 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 03:08:40.216462 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 03:08:40.216469 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 03:08:40.216476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 03:08:40.216483 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 03:08:40.216491 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 03:08:40.216504 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 03:08:40.216511 | orchestrator | 2026-02-13 03:08:40.216519 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-13 03:08:40.216526 | orchestrator | Friday 13 February 2026 03:08:37 +0000 (0:00:02.092) 0:00:25.011 ******* 2026-02-13 03:08:40.216534 | orchestrator | changed: [testbed-manager] 2026-02-13 03:08:40.216541 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:08:40.216549 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:08:40.216556 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:08:40.216563 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:08:40.216570 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:08:40.216577 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:08:40.216584 | orchestrator | 2026-02-13 03:08:40.216593 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-13 03:08:40.216608 | orchestrator | Friday 13 
February 2026 03:08:38 +0000 (0:00:01.860) 0:00:26.872 ******* 2026-02-13 03:08:40.216619 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:40.216644 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:08:40.216656 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 03:08:40.216664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:40.216677 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:40.216694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:40.216707 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:40.216727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:40.216749 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:40.216772 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:45.862677 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:45.862831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:45.862849 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:45.862871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:45.862900 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:45.862909 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:45.862917 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:45.862947 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:45.862955 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:45.862963 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:45.862971 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:45.862984 | orchestrator |
2026-02-13 03:08:45.862997 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2026-02-13 03:08:45.863010 | orchestrator | Friday 13 February 2026 03:08:40 +0000 (0:00:01.470) 0:00:28.342 *******
2026-02-13 03:08:45.863021 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-13 03:08:45.863034 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-13 03:08:45.863055 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-13 03:08:45.863067 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-13 03:08:45.863079 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-13 03:08:45.863090 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-13 03:08:45.863101 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2026-02-13 03:08:45.863111 | orchestrator |
2026-02-13 03:08:45.863122 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2026-02-13 03:08:45.863133 | orchestrator | Friday 13 February 2026 03:08:42 +0000 (0:00:01.856) 0:00:30.199 *******
2026-02-13 03:08:45.863144 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-13 03:08:45.863156 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-13 03:08:45.863168 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-13 03:08:45.863188 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-13 03:08:45.863200 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-13 03:08:45.863211 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-13 03:08:45.863219 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-13 03:08:45.863227 | orchestrator |
2026-02-13 03:08:45.863236 | orchestrator | TASK [common : Check common containers] ****************************************
2026-02-13 03:08:45.863243 | orchestrator | Friday 13 February 2026 03:08:43 +0000 (0:00:01.632) 0:00:31.832 *******
2026-02-13 03:08:45.863252 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:45.863272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:46.470866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:46.470973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:46.471010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:46.471037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:46.471050 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:46.471062 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 03:08:46.471073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:46.471103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:46.471116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:46.471139 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:46.471151 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:46.471163 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:46.471176 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:46.471188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:08:46.471208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:10:07.388453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:10:07.388590 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:10:07.388606 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:10:07.388634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:10:07.388647 | orchestrator |
2026-02-13 03:10:07.388660 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-02-13 03:10:07.388673 | orchestrator | Friday 13 February 2026 03:08:46 +0000 (0:00:02.582) 0:00:34.414 *******
2026-02-13 03:10:07.388692 | orchestrator | changed: [testbed-manager]
2026-02-13 03:10:07.388712 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:10:07.388756 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:10:07.388775 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:10:07.388794 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:10:07.388812 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:10:07.388831 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:10:07.388850 | orchestrator |
2026-02-13 03:10:07.388869 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-02-13 03:10:07.388887 | orchestrator | Friday 13 February 2026 03:08:47 +0000 (0:00:01.354) 0:00:35.769 *******
2026-02-13 03:10:07.388900 | orchestrator | changed: [testbed-manager]
2026-02-13 03:10:07.388911 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:10:07.388921 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:10:07.388932 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:10:07.388942 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:10:07.388953 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:10:07.388964 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:10:07.388975 | orchestrator |
2026-02-13 03:10:07.388988 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-13 03:10:07.389001 | orchestrator | Friday 13 February 2026 03:08:48 +0000 (0:00:01.066) 0:00:36.836 *******
2026-02-13 03:10:07.389014 | orchestrator |
2026-02-13 03:10:07.389026 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-13 03:10:07.389039 | orchestrator | Friday 13 February 2026 03:08:48 +0000 (0:00:00.064) 0:00:36.901 *******
2026-02-13 03:10:07.389051 | orchestrator |
2026-02-13 03:10:07.389063 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-13 03:10:07.389077 | orchestrator | Friday 13 February 2026 03:08:49 +0000 (0:00:00.063) 0:00:36.964 *******
2026-02-13 03:10:07.389090 | orchestrator |
2026-02-13 03:10:07.389102 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-13 03:10:07.389115 | orchestrator | Friday 13 February 2026 03:08:49 +0000 (0:00:00.063) 0:00:37.027 *******
2026-02-13 03:10:07.389127 | orchestrator |
2026-02-13 03:10:07.389140 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-13 03:10:07.389164 | orchestrator | Friday 13 February 2026 03:08:49 +0000 (0:00:00.225) 0:00:37.252 *******
2026-02-13 03:10:07.389177 | orchestrator |
2026-02-13 03:10:07.389190 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-13 03:10:07.389202 | orchestrator | Friday 13 February 2026 03:08:49 +0000 (0:00:00.063) 0:00:37.315 *******
2026-02-13 03:10:07.389215 | orchestrator |
2026-02-13 03:10:07.389228 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-13 03:10:07.389241 | orchestrator | Friday 13 February 2026 03:08:49 +0000 (0:00:00.061) 0:00:37.377 *******
2026-02-13 03:10:07.389253 | orchestrator |
2026-02-13 03:10:07.389266 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-02-13 03:10:07.389278 | orchestrator | Friday 13 February 2026 03:08:49 +0000 (0:00:00.092) 0:00:37.470 *******
2026-02-13 03:10:07.389291 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:10:07.389303 | orchestrator | changed: [testbed-manager]
2026-02-13 03:10:07.389316 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:10:07.389329 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:10:07.389341 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:10:07.389371 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:10:07.389383 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:10:07.389394 | orchestrator |
2026-02-13 03:10:07.389405 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-02-13 03:10:07.389416 | orchestrator | Friday 13 February 2026 03:09:24 +0000 (0:00:34.866) 0:01:12.336 *******
2026-02-13 03:10:07.389427 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:10:07.389438 | orchestrator | changed: [testbed-manager]
2026-02-13 03:10:07.389448 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:10:07.389459 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:10:07.389470 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:10:07.389480 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:10:07.389491 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:10:07.389502 | orchestrator |
2026-02-13 03:10:07.389513 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-02-13 03:10:07.389523 | orchestrator | Friday 13 February 2026 03:09:56 +0000 (0:00:32.574) 0:01:44.910 *******
2026-02-13 03:10:07.389534 | orchestrator | ok: [testbed-manager]
2026-02-13 03:10:07.389546 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:10:07.389557 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:10:07.389568 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:10:07.389579 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:10:07.389589 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:10:07.389600 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:10:07.389611 | orchestrator |
2026-02-13 03:10:07.389621 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-02-13 03:10:07.389632 | orchestrator | Friday 13 February 2026 03:09:58 +0000 (0:00:01.857) 0:01:46.768 *******
2026-02-13 03:10:07.389643 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:10:07.389654 | orchestrator | changed: [testbed-manager]
2026-02-13 03:10:07.389665 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:10:07.389678 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:10:07.389697 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:10:07.389714 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:10:07.389771 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:10:07.389792 | orchestrator |
2026-02-13 03:10:07.389810 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:10:07.389824 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-13 03:10:07.389837 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-13 03:10:07.389857 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-13 03:10:07.389878 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-13 03:10:07.389888 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-13 03:10:07.389899 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-13 03:10:07.389910 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-13 03:10:07.389921 | orchestrator |
2026-02-13 03:10:07.389932 | orchestrator |
2026-02-13 03:10:07.389943 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:10:07.389954 | orchestrator | Friday 13 February 2026 03:10:07 +0000 (0:00:08.544) 0:01:55.313 *******
2026-02-13 03:10:07.389965 | orchestrator | ===============================================================================
2026-02-13 03:10:07.389976 | orchestrator | common : Restart fluentd container ------------------------------------- 34.87s
2026-02-13 03:10:07.389986 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.57s
2026-02-13 03:10:07.389997 | orchestrator | common : Restart cron container ----------------------------------------- 8.54s
2026-02-13 03:10:07.390008 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.59s
2026-02-13 03:10:07.390084 | orchestrator | common : Copying over config.json files for services -------------------- 3.50s
2026-02-13 03:10:07.390096 | orchestrator | common : Check common containers ---------------------------------------- 2.58s
2026-02-13 03:10:07.390107 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.57s
2026-02-13 03:10:07.390117 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.49s
2026-02-13 03:10:07.390128 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.09s
2026-02-13 03:10:07.390138 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.86s
2026-02-13 03:10:07.390149 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.86s
2026-02-13 03:10:07.390160 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.86s
2026-02-13 03:10:07.390170 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.71s
2026-02-13 03:10:07.390181 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.63s
2026-02-13 03:10:07.390191 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.47s
2026-02-13 03:10:07.390202 | orchestrator | common : Creating log volume -------------------------------------------- 1.35s
2026-02-13 03:10:07.390223 | orchestrator | common : include_tasks -------------------------------------------------- 1.26s
2026-02-13 03:10:07.822969 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.21s
2026-02-13 03:10:07.823059 | orchestrator | common : include_tasks -------------------------------------------------- 1.16s
2026-02-13 03:10:07.823070 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.07s
2026-02-13 03:10:10.130691 | orchestrator | 2026-02-13 03:10:10 | INFO  | Task 9e095f71-46f9-4819-b919-46ff6742d9d3 (loadbalancer) was prepared for execution.
2026-02-13 03:10:10.130855 | orchestrator | 2026-02-13 03:10:10 | INFO  | It takes a moment until task 9e095f71-46f9-4819-b919-46ff6742d9d3 (loadbalancer) has been started and output is visible here.
2026-02-13 03:10:23.789650 | orchestrator |
2026-02-13 03:10:23.789824 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 03:10:23.789845 | orchestrator |
2026-02-13 03:10:23.789857 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 03:10:23.789869 | orchestrator | Friday 13 February 2026 03:10:14 +0000 (0:00:00.243) 0:00:00.243 *******
2026-02-13 03:10:23.789906 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:10:23.789919 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:10:23.789930 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:10:23.789941 | orchestrator |
2026-02-13 03:10:23.789952 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 03:10:23.789963 | orchestrator | Friday 13 February 2026 03:10:14 +0000 (0:00:00.307) 0:00:00.551 *******
2026-02-13 03:10:23.789974 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2026-02-13 03:10:23.789985 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2026-02-13 03:10:23.789995 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2026-02-13 03:10:23.790006 | orchestrator |
2026-02-13 03:10:23.790088 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2026-02-13 03:10:23.790103 | orchestrator |
2026-02-13 03:10:23.790114 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-13 03:10:23.790138 | orchestrator | Friday 13 February 2026 03:10:14 +0000 (0:00:00.451) 0:00:01.002 *******
2026-02-13 03:10:23.790150 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:10:23.790161 | orchestrator |
2026-02-13 03:10:23.790172 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2026-02-13 03:10:23.790184 | orchestrator | Friday 13 February 2026 03:10:15 +0000 (0:00:00.538) 0:00:01.541 *******
2026-02-13 03:10:23.790197 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:10:23.790209 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:10:23.790221 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:10:23.790233 | orchestrator |
2026-02-13 03:10:23.790246 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-13 03:10:23.790259 | orchestrator | Friday 13 February 2026 03:10:16 +0000 (0:00:00.621) 0:00:02.162 *******
2026-02-13 03:10:23.790271 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:10:23.790283 | orchestrator |
2026-02-13 03:10:23.790295 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2026-02-13 03:10:23.790307 | orchestrator | Friday 13 February 2026 03:10:16 +0000 (0:00:00.663) 0:00:02.825 *******
2026-02-13 03:10:23.790320 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:10:23.790332 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:10:23.790343 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:10:23.790353 | orchestrator |
2026-02-13 03:10:23.790364 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2026-02-13 03:10:23.790375 | orchestrator | Friday 13 February 2026 03:10:17 +0000 (0:00:00.593) 0:00:03.419 *******
2026-02-13 03:10:23.790386 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-13 03:10:23.790397 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-13 03:10:23.790408 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2026-02-13 03:10:23.790419 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-13 03:10:23.790429 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-13 03:10:23.790440 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2026-02-13 03:10:23.790450 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-13 03:10:23.790462 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-13 03:10:23.790473 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2026-02-13 03:10:23.790484 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-13 03:10:23.790503 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-13 03:10:23.790514 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2026-02-13 03:10:23.790525 | orchestrator |
2026-02-13 03:10:23.790535 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-13 03:10:23.790546 | orchestrator | Friday 13 February 2026 03:10:19 +0000 (0:00:02.049) 0:00:05.469 *******
2026-02-13 03:10:23.790557 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2026-02-13 03:10:23.790568 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2026-02-13 03:10:23.790579 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2026-02-13 03:10:23.790590 | orchestrator |
2026-02-13 03:10:23.790601 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-13 03:10:23.790612 | orchestrator | Friday 13 February 2026 03:10:20 +0000 (0:00:00.705) 0:00:06.174 *******
2026-02-13 03:10:23.790623 | orchestrator | changed: [testbed-node-2] =>
(item=ip_vs) 2026-02-13 03:10:23.790633 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-13 03:10:23.790644 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-13 03:10:23.790655 | orchestrator | 2026-02-13 03:10:23.790665 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-13 03:10:23.790676 | orchestrator | Friday 13 February 2026 03:10:21 +0000 (0:00:01.231) 0:00:07.406 ******* 2026-02-13 03:10:23.790687 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-13 03:10:23.790698 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:10:23.790790 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-13 03:10:23.790805 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:10:23.790816 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-13 03:10:23.790827 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:10:23.790838 | orchestrator | 2026-02-13 03:10:23.790849 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-13 03:10:23.790860 | orchestrator | Friday 13 February 2026 03:10:21 +0000 (0:00:00.526) 0:00:07.933 ******* 2026-02-13 03:10:23.790880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-13 03:10:23.790898 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-13 03:10:23.790909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-13 03:10:23.790929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 
03:10:23.790941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 03:10:23.790961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 03:10:28.885511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 03:10:28.885632 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 03:10:28.885649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 03:10:28.885662 | orchestrator | 2026-02-13 03:10:28.885676 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-13 03:10:28.885690 | orchestrator | Friday 13 February 2026 03:10:23 +0000 (0:00:01.838) 0:00:09.771 ******* 2026-02-13 03:10:28.885702 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:10:28.885782 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:10:28.885797 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:10:28.885809 | orchestrator | 2026-02-13 03:10:28.885821 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-13 03:10:28.885832 | orchestrator | Friday 13 February 2026 03:10:24 +0000 (0:00:00.915) 0:00:10.686 ******* 2026-02-13 03:10:28.885844 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-13 03:10:28.885855 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-13 
03:10:28.885866 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-13 03:10:28.885876 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-13 03:10:28.885887 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-13 03:10:28.885898 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-13 03:10:28.885909 | orchestrator | 2026-02-13 03:10:28.885919 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-13 03:10:28.885930 | orchestrator | Friday 13 February 2026 03:10:26 +0000 (0:00:01.389) 0:00:12.076 ******* 2026-02-13 03:10:28.885941 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:10:28.885952 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:10:28.885962 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:10:28.885973 | orchestrator | 2026-02-13 03:10:28.885984 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-13 03:10:28.885997 | orchestrator | Friday 13 February 2026 03:10:26 +0000 (0:00:00.873) 0:00:12.950 ******* 2026-02-13 03:10:28.886010 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:10:28.886078 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:10:28.886090 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:10:28.886102 | orchestrator | 2026-02-13 03:10:28.886115 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-13 03:10:28.886127 | orchestrator | Friday 13 February 2026 03:10:28 +0000 (0:00:01.323) 0:00:14.274 ******* 2026-02-13 03:10:28.886141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-13 03:10:28.886177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:28.886191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:28.886205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a', '__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 03:10:28.886230 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:10:28.886244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-13 03:10:28.886294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:28.886308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:28.886322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a', '__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 03:10:28.886338 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:10:28.886368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-13 03:10:31.626816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:31.626928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:31.626940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a', '__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 03:10:31.626949 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:10:31.626958 | orchestrator | 2026-02-13 03:10:31.626967 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-13 03:10:31.626975 | orchestrator | Friday 13 February 2026 03:10:28 +0000 (0:00:00.597) 0:00:14.871 ******* 2026-02-13 03:10:31.626983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-13 03:10:31.626991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-13 03:10:31.626999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-13 03:10:31.627044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 03:10:31.627053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:31.627061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a', 
'__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 03:10:31.627068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 03:10:31.627076 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:31.627083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a', 
'__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 03:10:31.627107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 03:10:39.671231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:39.671343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a', 
'__omit_place_holder__fb609f99a22a45f251504a7413db2c261b2cb21a'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2026-02-13 03:10:39.671359 | orchestrator |
2026-02-13 03:10:39.671373 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-02-13 03:10:39.671386 | orchestrator | Friday 13 February 2026 03:10:31 +0000 (0:00:02.739) 0:00:17.611 *******
2026-02-13 03:10:39.671398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-13 03:10:39.671411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-13 03:10:39.671422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-13 03:10:39.671459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:39.671506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:39.671519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:39.671530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:39.671542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:39.671553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:39.671564 | orchestrator |
2026-02-13 03:10:39.671575 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-02-13 03:10:39.671586 | orchestrator | Friday 13 February 2026 03:10:34 +0000 (0:00:03.042) 0:00:20.653 *******
2026-02-13 03:10:39.671608 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-13 03:10:39.671621 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-13 03:10:39.671631 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-02-13 03:10:39.671642 | orchestrator |
2026-02-13 03:10:39.671653 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-02-13 03:10:39.671664 | orchestrator | Friday 13 February 2026 03:10:36 +0000 (0:00:01.767) 0:00:22.421 *******
2026-02-13 03:10:39.671675 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-13 03:10:39.671686 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-13 03:10:39.671696 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-02-13 03:10:39.671707 | orchestrator |
2026-02-13 03:10:39.671809 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-02-13 03:10:39.671827 | orchestrator | Friday 13 February 2026 03:10:39 +0000 (0:00:02.692) 0:00:25.114 *******
2026-02-13 03:10:39.671841 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:10:39.671853 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:10:39.671864 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:10:39.671875 | orchestrator |
2026-02-13 03:10:39.671895 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-02-13 03:10:50.928963 | orchestrator | Friday 13 February 2026 03:10:39 +0000 (0:00:00.543) 0:00:25.658 *******
2026-02-13 03:10:50.929106 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-13 03:10:50.929150 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-13 03:10:50.929170 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-02-13 03:10:50.929189 | orchestrator |
2026-02-13 03:10:50.929211 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-02-13 03:10:50.929230 | orchestrator | Friday 13 February 2026 03:10:41 +0000 (0:00:02.015) 0:00:27.673 *******
2026-02-13 03:10:50.929249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-13 03:10:50.929268 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-13 03:10:50.929287 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-02-13 03:10:50.929305 | orchestrator |
2026-02-13 03:10:50.929324 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-02-13 03:10:50.929343 | orchestrator | Friday 13 February 2026 03:10:43 +0000 (0:00:02.088) 0:00:29.762 *******
2026-02-13 03:10:50.929363 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-02-13 03:10:50.929382 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-02-13 03:10:50.929400 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-02-13 03:10:50.929418 | orchestrator |
2026-02-13 03:10:50.929453 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-02-13 03:10:50.929474 | orchestrator | Friday 13 February 2026 03:10:45 +0000 (0:00:01.345) 0:00:31.108 *******
2026-02-13 03:10:50.929493 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-02-13 03:10:50.929514 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-02-13 03:10:50.929532 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-02-13 03:10:50.929551 | orchestrator |
2026-02-13 03:10:50.929605 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-02-13 03:10:50.929625 | orchestrator | Friday 13 February 2026 03:10:46 +0000 (0:00:01.381) 0:00:32.489 *******
2026-02-13 03:10:50.929643 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:10:50.929662 | orchestrator |
2026-02-13 03:10:50.929680 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-02-13 03:10:50.929699 | orchestrator | Friday 13 February 2026 03:10:46 +0000 (0:00:00.503) 0:00:32.993 *******
2026-02-13 03:10:50.929749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-13 03:10:50.929774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-13 03:10:50.929795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-13 03:10:50.929831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:50.929844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:50.929856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:50.929876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:50.929889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:50.929900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:50.929911 | orchestrator |
2026-02-13 03:10:50.929923 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-02-13 03:10:50.929934 | orchestrator | Friday 13 February 2026 03:10:50 +0000 (0:00:03.340) 0:00:36.333 *******
2026-02-13 03:10:50.929958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-13 03:10:51.717495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:51.717600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:51.717644 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:10:51.717660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-13 03:10:51.717673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:51.717684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:51.717695 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:10:51.717750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-13 03:10:51.717785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:51.717798 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:51.717819 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:10:51.717830 | orchestrator |
2026-02-13 03:10:51.717843 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-02-13 03:10:51.717855 | orchestrator | Friday 13 February 2026 03:10:50 +0000 (0:00:00.586) 0:00:36.919 *******
2026-02-13 03:10:51.717867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-13 03:10:51.717879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:51.717891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:51.717902 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:10:51.717913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-13 03:10:51.717937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:52.517124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:52.517242 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:10:52.517255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-13 03:10:52.517266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:52.517274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:52.517281 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:10:52.517289 | orchestrator |
2026-02-13 03:10:52.517298 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-13 03:10:52.517306 | orchestrator | Friday 13 February 2026 03:10:51 +0000 (0:00:00.785) 0:00:37.704 *******
2026-02-13 03:10:52.517314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-13 03:10:52.517322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:52.517344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:52.517358 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:10:52.517366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-13 03:10:52.517373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:52.517381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:52.517389 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:10:52.517396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-13 03:10:52.517429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:52.517441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:52.517459 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:10:53.841330 | orchestrator |
2026-02-13 03:10:53.841463 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-13 03:10:53.841481 | orchestrator | Friday 13 February 2026 03:10:52 +0000 (0:00:00.791) 0:00:38.496 *******
2026-02-13 03:10:53.841496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-13 03:10:53.841512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 03:10:53.841525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 03:10:53.841537 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:10:53.841550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-13 03:10:53.841562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:53.841600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:53.841636 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:10:53.841669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-13 03:10:53.841682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:53.841694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:53.841705 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:10:53.841746 | orchestrator | 2026-02-13 03:10:53.841759 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-13 03:10:53.841771 | orchestrator | Friday 13 February 2026 03:10:53 +0000 (0:00:00.571) 0:00:39.067 ******* 2026-02-13 03:10:53.841782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-13 03:10:53.841793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:53.841825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:53.841838 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:10:53.841859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-13 03:10:54.823176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:54.823309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:54.823337 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:10:54.823361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-13 03:10:54.823382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:54.823403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:54.823443 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:10:54.823456 | orchestrator | 2026-02-13 03:10:54.823468 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-13 03:10:54.823481 | orchestrator | Friday 13 February 2026 03:10:53 +0000 (0:00:00.757) 0:00:39.825 ******* 2026-02-13 03:10:54.823507 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-13 03:10:54.823541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:54.823553 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:54.823565 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:10:54.823576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-13 03:10:54.823587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:54.823607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:54.823619 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:10:54.823635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-13 03:10:54.823653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:56.134577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:56.134689 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:10:56.134707 | orchestrator | 2026-02-13 03:10:56.134770 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-13 03:10:56.134784 | orchestrator | Friday 13 February 2026 03:10:54 +0000 (0:00:00.981) 0:00:40.806 ******* 2026-02-13 03:10:56.134797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-13 03:10:56.134811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:56.134845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:56.134858 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:10:56.134870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-13 03:10:56.134897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:56.134928 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:56.134940 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:10:56.134951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-13 03:10:56.134963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:56.134982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:10:56.134993 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:10:56.135004 | orchestrator | 2026-02-13 03:10:56.135015 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-13 03:10:56.135026 | orchestrator | Friday 13 February 2026 03:10:55 +0000 (0:00:00.572) 0:00:41.379 ******* 2026-02-13 03:10:56.135037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-13 03:10:56.135049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:10:56.135076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:11:02.567116 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:02.567247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-13 03:11:02.567268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:11:02.567306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:11:02.567319 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:02.567332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-13 03:11:02.567358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 03:11:02.567371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 03:11:02.567382 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:02.567393 | orchestrator | 2026-02-13 03:11:02.567406 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-13 03:11:02.567418 | orchestrator | Friday 13 February 2026 03:10:56 +0000 (0:00:00.742) 0:00:42.122 ******* 2026-02-13 03:11:02.567429 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-13 03:11:02.567461 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-13 03:11:02.567472 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-13 03:11:02.567483 | orchestrator | 2026-02-13 03:11:02.567494 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-13 03:11:02.567507 | orchestrator | Friday 13 February 2026 03:10:57 +0000 (0:00:01.646) 0:00:43.768 ******* 2026-02-13 03:11:02.567518 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-13 03:11:02.567530 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-13 03:11:02.567540 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-13 03:11:02.567551 | orchestrator | 2026-02-13 03:11:02.567573 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-13 03:11:02.567584 | orchestrator | Friday 13 February 2026 03:10:59 +0000 (0:00:01.726) 0:00:45.494 ******* 2026-02-13 03:11:02.567595 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-13 03:11:02.567605 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-13 03:11:02.567617 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-13 03:11:02.567628 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-13 03:11:02.567638 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:02.567652 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-13 03:11:02.567664 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:02.567677 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-13 03:11:02.567690 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:02.567702 | orchestrator | 2026-02-13 03:11:02.567744 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-13 03:11:02.567758 | orchestrator | Friday 13 February 2026 03:11:00 +0000 (0:00:00.781) 0:00:46.276 ******* 2026-02-13 03:11:02.567772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-13 03:11:02.567787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-13 03:11:02.567806 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-13 03:11:02.567830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 03:11:06.550101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 03:11:06.550183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 03:11:06.550193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 03:11:06.550200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 03:11:06.550206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 03:11:06.550212 | orchestrator | 2026-02-13 03:11:06.550233 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-13 03:11:06.550241 | orchestrator | Friday 13 February 2026 03:11:02 +0000 (0:00:02.279) 0:00:48.556 ******* 2026-02-13 03:11:06.550248 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:11:06.550253 | orchestrator | 2026-02-13 03:11:06.550259 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-13 03:11:06.550265 | orchestrator | Friday 13 February 2026 03:11:03 +0000 (0:00:00.760) 0:00:49.317 ******* 2026-02-13 03:11:06.550285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 03:11:06.550310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 03:11:06.550317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:06.550323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 03:11:06.550329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 03:11:06.550339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 03:11:06.550345 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:06.550363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 03:11:07.216187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 03:11:07.216281 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 03:11:07.216293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:07.216319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 03:11:07.216328 | orchestrator | 2026-02-13 03:11:07.216338 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-02-13 03:11:07.216348 | orchestrator | Friday 13 February 2026 03:11:06 +0000 (0:00:03.217) 0:00:52.534 ******* 2026-02-13 03:11:07.216357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 03:11:07.216402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 03:11:07.216412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:07.216421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 03:11:07.216429 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:07.216438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 03:11:07.216450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 03:11:07.216464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:07.216473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 03:11:07.216481 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:07.216495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 03:11:15.381419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 03:11:15.381503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-02-13 03:11:15.381512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.381532 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:15.381540 | orchestrator | 2026-02-13 03:11:15.381546 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-13 03:11:15.381552 | orchestrator | Friday 13 February 2026 03:11:07 +0000 (0:00:00.671) 0:00:53.206 ******* 2026-02-13 03:11:15.381558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-13 03:11:15.381565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-13 03:11:15.381572 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:15.381588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-13 03:11:15.381593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-13 03:11:15.381597 | 
orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:15.381602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-13 03:11:15.381607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-13 03:11:15.381611 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:15.381616 | orchestrator | 2026-02-13 03:11:15.381621 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-13 03:11:15.381625 | orchestrator | Friday 13 February 2026 03:11:08 +0000 (0:00:01.052) 0:00:54.259 ******* 2026-02-13 03:11:15.381630 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:11:15.381635 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:11:15.381639 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:11:15.381644 | orchestrator | 2026-02-13 03:11:15.381649 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-13 03:11:15.381654 | orchestrator | Friday 13 February 2026 03:11:09 +0000 (0:00:01.273) 0:00:55.532 ******* 2026-02-13 03:11:15.381658 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:11:15.381663 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:11:15.381668 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:11:15.381672 | orchestrator | 2026-02-13 03:11:15.381676 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-13 03:11:15.381681 | orchestrator | Friday 13 February 2026 03:11:11 +0000 (0:00:01.917) 0:00:57.449 ******* 2026-02-13 03:11:15.381686 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:11:15.381690 | 
orchestrator | 2026-02-13 03:11:15.381737 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-13 03:11:15.381744 | orchestrator | Friday 13 February 2026 03:11:12 +0000 (0:00:00.638) 0:00:58.088 ******* 2026-02-13 03:11:15.381750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 03:11:15.381765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.381772 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.381777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 03:11:15.381782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.381792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.946268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 03:11:15.946435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.946462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.946480 | orchestrator | 2026-02-13 03:11:15.946498 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-13 03:11:15.946517 | orchestrator | Friday 13 February 2026 03:11:15 +0000 (0:00:03.277) 0:01:01.366 ******* 2026-02-13 03:11:15.946533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 03:11:15.946551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.946620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.946637 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:15.946662 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 03:11:15.946678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.946694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:15.946738 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:15.946748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 03:11:15.946778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 03:11:25.078803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:25.078921 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:25.078941 | orchestrator | 2026-02-13 03:11:25.078954 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-13 03:11:25.078967 | orchestrator | Friday 13 February 2026 03:11:15 +0000 (0:00:00.566) 0:01:01.932 ******* 2026-02-13 03:11:25.078997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-13 03:11:25.079010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-13 03:11:25.079023 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:25.079034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-13 03:11:25.079045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-13 03:11:25.079056 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:25.079067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-13 03:11:25.079078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-13 03:11:25.079089 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:25.079100 | orchestrator | 2026-02-13 03:11:25.079111 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-13 03:11:25.079122 | orchestrator | Friday 13 February 2026 03:11:16 +0000 (0:00:00.791) 0:01:02.723 ******* 2026-02-13 03:11:25.079133 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:11:25.079144 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:11:25.079155 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:11:25.079166 | orchestrator | 2026-02-13 03:11:25.079177 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-13 03:11:25.079188 | orchestrator | Friday 13 February 2026 03:11:18 +0000 (0:00:01.515) 0:01:04.238 ******* 2026-02-13 03:11:25.079224 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:11:25.079236 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:11:25.079247 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:11:25.079257 | orchestrator | 2026-02-13 03:11:25.079269 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-13 03:11:25.079282 | orchestrator | 
Friday 13 February 2026 03:11:20 +0000 (0:00:01.955) 0:01:06.194 ******* 2026-02-13 03:11:25.079295 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:25.079307 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:25.079320 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:25.079332 | orchestrator | 2026-02-13 03:11:25.079344 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-13 03:11:25.079357 | orchestrator | Friday 13 February 2026 03:11:20 +0000 (0:00:00.310) 0:01:06.505 ******* 2026-02-13 03:11:25.079370 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:11:25.079382 | orchestrator | 2026-02-13 03:11:25.079395 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-13 03:11:25.079407 | orchestrator | Friday 13 February 2026 03:11:21 +0000 (0:00:00.629) 0:01:07.134 ******* 2026-02-13 03:11:25.079442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-13 03:11:25.079464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-13 03:11:25.079478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-13 03:11:25.079491 | orchestrator | 2026-02-13 03:11:25.079504 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-13 03:11:25.079518 | orchestrator | Friday 13 February 2026 03:11:23 +0000 (0:00:02.618) 0:01:09.753 ******* 2026-02-13 03:11:25.079539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-13 03:11:25.079553 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:25.079566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-13 03:11:25.079580 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:25.079602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-13 03:11:32.598550 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:32.598682 | orchestrator | 2026-02-13 03:11:32.598766 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-13 03:11:32.598790 | orchestrator | Friday 13 February 2026 03:11:25 +0000 (0:00:01.313) 0:01:11.067 ******* 2026-02-13 03:11:32.598846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 03:11:32.598872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 03:11:32.598893 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:32.598913 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 03:11:32.598963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 03:11:32.598976 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:32.598988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 03:11:32.598999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 03:11:32.599010 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:32.599020 | orchestrator | 2026-02-13 03:11:32.599037 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-13 03:11:32.599056 | orchestrator | Friday 13 February 2026 03:11:26 +0000 (0:00:01.594) 0:01:12.662 ******* 2026-02-13 03:11:32.599073 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:32.599091 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:32.599110 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:32.599128 | orchestrator | 2026-02-13 03:11:32.599152 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-13 03:11:32.599171 | orchestrator | Friday 13 February 2026 03:11:27 +0000 (0:00:00.424) 0:01:13.086 ******* 2026-02-13 03:11:32.599191 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:32.599209 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:32.599227 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:32.599245 | orchestrator | 2026-02-13 03:11:32.599263 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-13 03:11:32.599282 | orchestrator | Friday 13 February 2026 03:11:28 +0000 (0:00:01.287) 0:01:14.374 ******* 2026-02-13 03:11:32.599300 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:11:32.599318 | orchestrator | 2026-02-13 03:11:32.599337 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-13 03:11:32.599354 | orchestrator | Friday 13 February 2026 03:11:29 +0000 (0:00:00.928) 0:01:15.302 ******* 2026-02-13 03:11:32.599412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 03:11:32.599456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:11:32.599478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 
03:11:32.599498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 03:11:32.599518 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 03:11:32.599549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:11:33.228691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 03:11:33.228837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 03:11:33.228854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 03:11:33.228866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:11:33.228878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 03:11:33.228914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 03:11:33.228934 | orchestrator | 2026-02-13 03:11:33.228948 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-13 03:11:33.228960 | orchestrator | Friday 13 February 2026 03:11:32 +0000 (0:00:03.356) 0:01:18.658 ******* 2026-02-13 03:11:33.228973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 03:11:33.228985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:11:33.228997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 03:11:33.229008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 03:11:33.229020 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:33.229046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 03:11:39.289513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2026-02-13 03:11:39.289605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 03:11:39.289618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 03:11:39.289628 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:39.289639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 03:11:39.289648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:11:39.289742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 
03:11:39.289764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 03:11:39.289773 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:39.289781 | orchestrator | 2026-02-13 03:11:39.289791 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-13 03:11:39.289801 | orchestrator | Friday 13 February 2026 03:11:33 +0000 (0:00:00.663) 0:01:19.322 ******* 2026-02-13 03:11:39.289810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-13 03:11:39.289819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-13 03:11:39.289829 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:39.289837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-13 03:11:39.289845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-13 03:11:39.289853 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:39.289861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-13 03:11:39.289869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-13 03:11:39.289877 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:39.289885 | orchestrator | 2026-02-13 03:11:39.289893 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-13 03:11:39.289901 | orchestrator | Friday 13 February 2026 03:11:34 +0000 (0:00:01.128) 0:01:20.451 ******* 2026-02-13 03:11:39.289909 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:11:39.289925 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:11:39.289933 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:11:39.289941 | orchestrator | 2026-02-13 03:11:39.289949 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-13 03:11:39.289957 | orchestrator | Friday 13 February 2026 03:11:35 +0000 (0:00:01.265) 0:01:21.716 ******* 2026-02-13 03:11:39.289965 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:11:39.289973 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:11:39.289981 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:11:39.289994 | orchestrator | 2026-02-13 03:11:39.290007 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-13 
03:11:39.290076 | orchestrator | Friday 13 February 2026 03:11:37 +0000 (0:00:01.954) 0:01:23.671 ******* 2026-02-13 03:11:39.290092 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:39.290103 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:39.290111 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:39.290120 | orchestrator | 2026-02-13 03:11:39.290129 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-13 03:11:39.290138 | orchestrator | Friday 13 February 2026 03:11:37 +0000 (0:00:00.301) 0:01:23.972 ******* 2026-02-13 03:11:39.290147 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:39.290156 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:39.290165 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:39.290174 | orchestrator | 2026-02-13 03:11:39.290183 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-13 03:11:39.290192 | orchestrator | Friday 13 February 2026 03:11:38 +0000 (0:00:00.303) 0:01:24.276 ******* 2026-02-13 03:11:39.290202 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:11:39.290211 | orchestrator | 2026-02-13 03:11:39.290220 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-13 03:11:39.290239 | orchestrator | Friday 13 February 2026 03:11:39 +0000 (0:00:00.996) 0:01:25.273 ******* 2026-02-13 03:11:42.479566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 03:11:42.479675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 03:11:42.479722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 03:11:42.479768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 03:11:42.479780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 03:11:42.479792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:42.479838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 03:11:42.479851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 03:11:42.479863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 03:11:42.479886 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 03:11:42.479906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 03:11:42.479925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 03:11:42.479962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.315509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.315617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 03:11:43.315667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 03:11:43.315688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.315801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.315841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.315885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.315904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': 
'30'}}})  2026-02-13 03:11:43.315937 | orchestrator | 2026-02-13 03:11:43.315958 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-13 03:11:43.315977 | orchestrator | Friday 13 February 2026 03:11:42 +0000 (0:00:03.456) 0:01:28.730 ******* 2026-02-13 03:11:43.315997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 03:11:43.316018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 03:11:43.316037 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.316057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.316080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.770946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': 
{'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.771076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.771102 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:43.771126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 03:11:43.771147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 03:11:43.771789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.771827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.771869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.771906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.771932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.771952 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:43.771975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 03:11:43.771994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 03:11:43.772011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 03:11:43.772040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 03:11:53.433203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 03:11:53.433351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:11:53.433369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 03:11:53.433380 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:53.433392 | orchestrator | 2026-02-13 03:11:53.433402 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-13 03:11:53.433413 | orchestrator | Friday 13 February 2026 03:11:43 +0000 (0:00:01.029) 0:01:29.759 ******* 2026-02-13 03:11:53.433422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-13 03:11:53.433434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-13 03:11:53.433444 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:53.433452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-13 03:11:53.433461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-13 03:11:53.433470 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:53.433479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-13 03:11:53.433511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-13 03:11:53.433520 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:53.433529 | orchestrator | 2026-02-13 03:11:53.433538 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-13 03:11:53.433547 | orchestrator | Friday 13 February 2026 03:11:44 +0000 (0:00:01.224) 0:01:30.983 ******* 2026-02-13 03:11:53.433556 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:11:53.433565 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:11:53.433574 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:11:53.433583 | orchestrator | 2026-02-13 03:11:53.433592 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-13 03:11:53.433601 | orchestrator | Friday 13 February 2026 03:11:46 +0000 (0:00:01.340) 0:01:32.323 ******* 2026-02-13 03:11:53.433609 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:11:53.433618 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:11:53.433627 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:11:53.433636 | 
orchestrator | 2026-02-13 03:11:53.433644 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-13 03:11:53.433653 | orchestrator | Friday 13 February 2026 03:11:48 +0000 (0:00:02.023) 0:01:34.346 ******* 2026-02-13 03:11:53.433679 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:53.433719 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:11:53.433731 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:53.433741 | orchestrator | 2026-02-13 03:11:53.433752 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-13 03:11:53.433762 | orchestrator | Friday 13 February 2026 03:11:48 +0000 (0:00:00.311) 0:01:34.658 ******* 2026-02-13 03:11:53.433776 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:11:53.433791 | orchestrator | 2026-02-13 03:11:53.433805 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-13 03:11:53.433819 | orchestrator | Friday 13 February 2026 03:11:49 +0000 (0:00:01.050) 0:01:35.708 ******* 2026-02-13 03:11:53.433846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 03:11:53.433866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 03:11:53.433912 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 03:11:56.281776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 03:11:56.281928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 03:11:56.281970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 03:11:56.281995 | orchestrator | 2026-02-13 03:11:56.282009 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-13 03:11:56.282098 | orchestrator | Friday 13 February 2026 03:11:53 +0000 (0:00:03.834) 0:01:39.543 ******* 2026-02-13 03:11:56.282119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 03:11:56.282143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 03:11:59.845087 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:11:59.845205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 
03:11:59.845245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 03:11:59.845286 | orchestrator | skipping: [testbed-node-1] 
2026-02-13 03:11:59.845321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 03:11:59.845340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 03:11:59.845362 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:11:59.845374 | orchestrator | 2026-02-13 03:11:59.845385 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-13 03:11:59.845397 | orchestrator | 
Friday 13 February 2026 03:11:56 +0000 (0:00:02.819) 0:01:42.362 ******* 2026-02-13 03:11:59.845409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 03:11:59.845432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 03:12:07.919590 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:07.919738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 03:12:07.919758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 03:12:07.919770 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:12:07.919781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 03:12:07.919807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 03:12:07.919818 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:07.919829 | orchestrator | 2026-02-13 03:12:07.919841 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-13 03:12:07.919853 | orchestrator | Friday 13 February 2026 03:11:59 +0000 (0:00:03.468) 0:01:45.830 ******* 2026-02-13 03:12:07.919886 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:12:07.919896 | orchestrator 
| changed: [testbed-node-1] 2026-02-13 03:12:07.919906 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:12:07.919916 | orchestrator | 2026-02-13 03:12:07.919925 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-13 03:12:07.919935 | orchestrator | Friday 13 February 2026 03:12:01 +0000 (0:00:01.342) 0:01:47.173 ******* 2026-02-13 03:12:07.919945 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:12:07.919954 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:12:07.919964 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:12:07.919973 | orchestrator | 2026-02-13 03:12:07.919983 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-13 03:12:07.919993 | orchestrator | Friday 13 February 2026 03:12:03 +0000 (0:00:01.989) 0:01:49.162 ******* 2026-02-13 03:12:07.920002 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:07.920012 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:12:07.920021 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:07.920031 | orchestrator | 2026-02-13 03:12:07.920040 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-13 03:12:07.920050 | orchestrator | Friday 13 February 2026 03:12:03 +0000 (0:00:00.298) 0:01:49.460 ******* 2026-02-13 03:12:07.920060 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:12:07.920069 | orchestrator | 2026-02-13 03:12:07.920079 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-13 03:12:07.920088 | orchestrator | Friday 13 February 2026 03:12:04 +0000 (0:00:00.992) 0:01:50.453 ******* 2026-02-13 03:12:07.920116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 03:12:07.920130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 03:12:07.920141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 03:12:07.920151 | 
orchestrator | 2026-02-13 03:12:07.920161 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-13 03:12:07.920179 | orchestrator | Friday 13 February 2026 03:12:07 +0000 (0:00:02.861) 0:01:53.315 ******* 2026-02-13 03:12:07.920190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-13 03:12:07.920201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-13 03:12:07.920211 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:07.920221 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:12:07.920232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-13 03:12:07.920303 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:07.920320 | orchestrator | 2026-02-13 03:12:07.920330 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-13 03:12:07.920340 | orchestrator | Friday 13 February 2026 03:12:07 +0000 (0:00:00.404) 0:01:53.719 ******* 2026-02-13 03:12:07.920350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-13 03:12:07.920370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-13 03:12:16.161581 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:16.161744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-13 03:12:16.161772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-13 03:12:16.161787 | orchestrator | skipping: 
[testbed-node-1] 2026-02-13 03:12:16.161800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-02-13 03:12:16.161811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-02-13 03:12:16.161848 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:16.161860 | orchestrator | 2026-02-13 03:12:16.161872 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-13 03:12:16.161884 | orchestrator | Friday 13 February 2026 03:12:08 +0000 (0:00:00.827) 0:01:54.546 ******* 2026-02-13 03:12:16.161895 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:12:16.161906 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:12:16.161917 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:12:16.161927 | orchestrator | 2026-02-13 03:12:16.161938 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-13 03:12:16.161949 | orchestrator | Friday 13 February 2026 03:12:09 +0000 (0:00:01.305) 0:01:55.852 ******* 2026-02-13 03:12:16.161960 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:12:16.161971 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:12:16.161981 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:12:16.161992 | orchestrator | 2026-02-13 03:12:16.162003 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-13 03:12:16.162079 | orchestrator | Friday 13 February 2026 03:12:11 +0000 (0:00:01.908) 0:01:57.761 ******* 2026-02-13 03:12:16.162093 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:16.162104 | orchestrator | skipping: [testbed-node-1] 2026-02-13 
03:12:16.162116 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:16.162128 | orchestrator | 2026-02-13 03:12:16.162141 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-13 03:12:16.162153 | orchestrator | Friday 13 February 2026 03:12:12 +0000 (0:00:00.305) 0:01:58.066 ******* 2026-02-13 03:12:16.162166 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:12:16.162178 | orchestrator | 2026-02-13 03:12:16.162191 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-13 03:12:16.162204 | orchestrator | Friday 13 February 2026 03:12:13 +0000 (0:00:01.048) 0:01:59.115 ******* 2026-02-13 03:12:16.162244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 03:12:16.162281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 03:12:16.162306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 03:12:17.687817 | orchestrator | 2026-02-13 03:12:17.687922 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-13 03:12:17.687940 | orchestrator | Friday 13 February 2026 03:12:16 +0000 (0:00:03.035) 0:02:02.150 ******* 2026-02-13 03:12:17.687977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 
'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 03:12:17.687993 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:17.688027 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 03:12:17.688074 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:12:17.688094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 03:12:17.688107 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:17.688118 | orchestrator | 2026-02-13 03:12:17.688129 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-13 03:12:17.688140 | orchestrator | Friday 13 February 2026 03:12:16 +0000 (0:00:00.619) 0:02:02.770 ******* 2026-02-13 03:12:17.688153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-13 03:12:17.688176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 03:12:17.688190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-13 03:12:17.688211 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 03:12:26.190146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-13 03:12:26.190261 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:26.190282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-13 03:12:26.190298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 03:12:26.190330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-13 03:12:26.190344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 03:12:26.190357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-13 03:12:26.190368 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:12:26.190379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-13 03:12:26.190391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 03:12:26.190402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-02-13 03:12:26.190437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 03:12:26.190449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  
2026-02-13 03:12:26.190460 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:26.190471 | orchestrator | 2026-02-13 03:12:26.190483 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-13 03:12:26.190496 | orchestrator | Friday 13 February 2026 03:12:17 +0000 (0:00:00.906) 0:02:03.676 ******* 2026-02-13 03:12:26.190507 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:12:26.190517 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:12:26.190528 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:12:26.190539 | orchestrator | 2026-02-13 03:12:26.190550 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-13 03:12:26.190560 | orchestrator | Friday 13 February 2026 03:12:19 +0000 (0:00:01.651) 0:02:05.328 ******* 2026-02-13 03:12:26.190572 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:12:26.190583 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:12:26.190593 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:12:26.190604 | orchestrator | 2026-02-13 03:12:26.190615 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-13 03:12:26.190626 | orchestrator | Friday 13 February 2026 03:12:21 +0000 (0:00:01.949) 0:02:07.278 ******* 2026-02-13 03:12:26.190639 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:26.190652 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:12:26.190744 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:26.190758 | orchestrator | 2026-02-13 03:12:26.190770 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-13 03:12:26.190783 | orchestrator | Friday 13 February 2026 03:12:21 +0000 (0:00:00.307) 0:02:07.586 ******* 2026-02-13 03:12:26.190796 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:26.190808 | orchestrator | skipping: [testbed-node-1] 
2026-02-13 03:12:26.190821 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:26.190859 | orchestrator | 2026-02-13 03:12:26.190879 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-13 03:12:26.190898 | orchestrator | Friday 13 February 2026 03:12:21 +0000 (0:00:00.296) 0:02:07.882 ******* 2026-02-13 03:12:26.190930 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:12:26.190948 | orchestrator | 2026-02-13 03:12:26.190966 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-13 03:12:26.190984 | orchestrator | Friday 13 February 2026 03:12:23 +0000 (0:00:01.137) 0:02:09.020 ******* 2026-02-13 03:12:26.191017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:12:26.191053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:12:26.191072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:12:26.191092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:12:26.191127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:12:26.749802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:12:26.749909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:12:26.749951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:12:26.749964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:12:26.749977 | 
orchestrator | 2026-02-13 03:12:26.749991 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-13 03:12:26.750003 | orchestrator | Friday 13 February 2026 03:12:26 +0000 (0:00:03.154) 0:02:12.174 ******* 2026-02-13 03:12:26.750068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:12:26.750092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-13 03:12:26.750105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:12:26.750125 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:26.750138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:12:26.750150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:12:26.750162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:12:26.750173 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:12:26.750197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:12:35.809157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:12:35.809269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:12:35.809287 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:35.809302 | orchestrator | 2026-02-13 03:12:35.809315 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-13 03:12:35.809327 | orchestrator | Friday 13 February 2026 03:12:26 +0000 (0:00:00.558) 0:02:12.732 ******* 2026-02-13 03:12:35.809340 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-13 03:12:35.809354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-13 03:12:35.809367 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:35.809379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-13 03:12:35.809390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-13 03:12:35.809402 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:12:35.809413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-13 03:12:35.809425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-13 03:12:35.809436 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:35.809447 
| orchestrator | 2026-02-13 03:12:35.809458 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-13 03:12:35.809469 | orchestrator | Friday 13 February 2026 03:12:27 +0000 (0:00:01.000) 0:02:13.733 ******* 2026-02-13 03:12:35.809481 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:12:35.809492 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:12:35.809530 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:12:35.809542 | orchestrator | 2026-02-13 03:12:35.809553 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-13 03:12:35.809564 | orchestrator | Friday 13 February 2026 03:12:29 +0000 (0:00:01.342) 0:02:15.076 ******* 2026-02-13 03:12:35.809575 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:12:35.809586 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:12:35.809597 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:12:35.809607 | orchestrator | 2026-02-13 03:12:35.809618 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-13 03:12:35.809629 | orchestrator | Friday 13 February 2026 03:12:31 +0000 (0:00:01.965) 0:02:17.041 ******* 2026-02-13 03:12:35.809640 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:35.809693 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:12:35.809706 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:12:35.809716 | orchestrator | 2026-02-13 03:12:35.809727 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-13 03:12:35.809755 | orchestrator | Friday 13 February 2026 03:12:31 +0000 (0:00:00.320) 0:02:17.362 ******* 2026-02-13 03:12:35.809767 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:12:35.809778 | orchestrator | 2026-02-13 03:12:35.809789 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2026-02-13 03:12:35.809799 | orchestrator | Friday 13 February 2026 03:12:32 +0000 (0:00:01.220) 0:02:18.582 ******* 2026-02-13 03:12:35.809812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 03:12:35.809827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 03:12:35.809839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': 
{'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 03:12:35.809859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 03:12:35.809880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 03:12:40.811198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 03:12:40.811266 | orchestrator | 2026-02-13 03:12:40.811272 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-13 03:12:40.811278 | orchestrator | Friday 13 February 2026 03:12:35 +0000 (0:00:03.208) 0:02:21.790 ******* 2026-02-13 03:12:40.811284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-13 03:12:40.811318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 03:12:40.811338 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:12:40.811346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-13 03:12:40.811362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-13 03:12:40.811366 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:12:40.811370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-02-13 03:12:40.811374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-02-13 03:12:40.811382 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:12:40.811386 | orchestrator |
2026-02-13 03:12:40.811390 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-02-13 03:12:40.811394 | orchestrator | Friday 13 February 2026 03:12:36 +0000 (0:00:00.611) 0:02:22.402 *******
2026-02-13 03:12:40.811399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-13 03:12:40.811404 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-13 03:12:40.811409 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:12:40.811413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-13 03:12:40.811417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-13 03:12:40.811420 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:12:40.811424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-02-13 03:12:40.811428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-02-13 03:12:40.811432 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:12:40.811435 | orchestrator |
2026-02-13 03:12:40.811441 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-02-13 03:12:40.811445 | orchestrator | Friday 13 February 2026 03:12:37 +0000 (0:00:00.834) 0:02:23.236 *******
2026-02-13 03:12:40.811449 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:12:40.811453 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:12:40.811456 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:12:40.811460 | orchestrator |
2026-02-13 03:12:40.811464 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-02-13 03:12:40.811468 | orchestrator | Friday 13 February 2026 03:12:38 +0000 (0:00:01.588) 0:02:24.825 *******
2026-02-13 03:12:40.811471 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:12:40.811475 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:12:40.811479 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:12:40.811483 | orchestrator |
2026-02-13 03:12:40.811487 | orchestrator | TASK [include_role : manila] ***************************************************
2026-02-13 03:12:40.811493 | orchestrator | Friday 13 February 2026 03:12:40 +0000 (0:00:01.969) 0:02:26.795 *******
2026-02-13 03:12:45.122282 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:12:45.122408 | orchestrator |
2026-02-13 03:12:45.122432 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-02-13 03:12:45.122449 | orchestrator | Friday 13 February 2026 03:12:41 +0000 (0:00:01.001) 0:02:27.796 *******
2026-02-13 03:12:45.122467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 03:12:45.122515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 03:12:45.122533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 03:12:45.122550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 03:12:45.122581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 03:12:45.122622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 03:12:45.122640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 03:12:45.122702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 03:12:45.122720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 03:12:45.122736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 03:12:45.122759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 03:12:45.122786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 03:12:46.019753 | orchestrator |
2026-02-13 03:12:46.019822 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-02-13 03:12:46.019829 | orchestrator | Friday 13 February 2026 03:12:45 +0000 (0:00:03.394) 0:02:31.190 *******
2026-02-13 03:12:46.019852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 03:12:46.019858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 03:12:46.019863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 03:12:46.019869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 03:12:46.019873 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:12:46.019889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 03:12:46.019904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 03:12:46.019914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 03:12:46.019918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 03:12:46.019922 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:12:46.019926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 03:12:46.019930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 03:12:46.019937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 03:12:46.019946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 03:12:56.494776 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:12:56.494895 | orchestrator |
2026-02-13 03:12:56.494910 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-02-13 03:12:56.494922 | orchestrator | Friday 13 February 2026 03:12:46 +0000 (0:00:00.903) 0:02:32.093 *******
2026-02-13 03:12:56.494933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-13 03:12:56.494945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-13 03:12:56.494957 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:12:56.494968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-13 03:12:56.494978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-13 03:12:56.494988 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:12:56.494997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-02-13 03:12:56.495007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-02-13 03:12:56.495017 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:12:56.495027 | orchestrator |
2026-02-13 03:12:56.495037 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-02-13 03:12:56.495046 | orchestrator | Friday 13 February 2026 03:12:46 +0000 (0:00:00.816) 0:02:32.910 *******
2026-02-13 03:12:56.495056 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:12:56.495066 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:12:56.495075 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:12:56.495085 | orchestrator |
2026-02-13 03:12:56.495095 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-02-13 03:12:56.495105 | orchestrator | Friday 13 February 2026 03:12:48 +0000 (0:00:01.212) 0:02:34.122 *******
2026-02-13 03:12:56.495114 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:12:56.495124 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:12:56.495134 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:12:56.495143 | orchestrator |
2026-02-13 03:12:56.495153 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-02-13 03:12:56.495163 | orchestrator | Friday 13 February 2026 03:12:50 +0000 (0:00:01.996) 0:02:36.119 *******
2026-02-13 03:12:56.495172 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:12:56.495182 | orchestrator |
2026-02-13 03:12:56.495192 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-02-13 03:12:56.495201 | orchestrator | Friday 13 February 2026 03:12:51 +0000 (0:00:01.286) 0:02:37.406 *******
2026-02-13 03:12:56.495211 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 03:12:56.495221 | orchestrator |
2026-02-13 03:12:56.495231 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-02-13 03:12:56.495263 | orchestrator | Friday 13 February 2026 03:12:54 +0000 (0:00:02.911) 0:02:40.317 *******
2026-02-13 03:12:56.495312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-13 03:12:56.495329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-13 03:12:56.495342 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:12:56.495361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-13 03:12:56.495383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-13 03:12:56.495394 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:12:56.495414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-13 03:12:58.720335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-13 03:12:58.720430 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:12:58.720442 | orchestrator |
2026-02-13 03:12:58.720451 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-02-13 03:12:58.720460 | orchestrator | Friday 13 February 2026 03:12:56 +0000 (0:00:02.158) 0:02:42.476 *******
2026-02-13 03:12:58.720509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-13 03:12:58.720520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-13 03:12:58.720529 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:12:58.720553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-13 03:12:58.720579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-13 03:12:58.720586 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:12:58.720595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 03:12:58.720609 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-13 03:13:08.053231 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:08.053342 | orchestrator | 2026-02-13 03:13:08.053358 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-13 03:13:08.053371 | orchestrator | Friday 13 February 2026 03:12:58 +0000 (0:00:02.231) 0:02:44.708 ******* 2026-02-13 03:13:08.053385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 03:13:08.053423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 
rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 03:13:08.053450 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:13:08.053462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 03:13:08.053480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 03:13:08.053509 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:13:08.053531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 03:13:08.053549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 03:13:08.053568 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:08.053587 | orchestrator | 2026-02-13 03:13:08.053744 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-13 03:13:08.053762 | orchestrator | Friday 13 February 2026 03:13:01 +0000 (0:00:02.724) 0:02:47.432 ******* 2026-02-13 03:13:08.053775 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:13:08.053822 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:13:08.053837 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:13:08.053850 | orchestrator | 2026-02-13 03:13:08.053863 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-13 03:13:08.053876 | orchestrator | Friday 13 February 2026 03:13:03 +0000 (0:00:02.059) 0:02:49.492 ******* 2026-02-13 03:13:08.053890 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:13:08.053903 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:13:08.053916 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:08.053929 | orchestrator | 2026-02-13 03:13:08.053942 | 
orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-13 03:13:08.053955 | orchestrator | Friday 13 February 2026 03:13:04 +0000 (0:00:01.346) 0:02:50.839 ******* 2026-02-13 03:13:08.053968 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:13:08.053981 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:13:08.053994 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:08.054007 | orchestrator | 2026-02-13 03:13:08.054080 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-13 03:13:08.054094 | orchestrator | Friday 13 February 2026 03:13:05 +0000 (0:00:00.311) 0:02:51.150 ******* 2026-02-13 03:13:08.054107 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:13:08.054121 | orchestrator | 2026-02-13 03:13:08.054134 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-13 03:13:08.054147 | orchestrator | Friday 13 February 2026 03:13:06 +0000 (0:00:01.251) 0:02:52.402 ******* 2026-02-13 03:13:08.054167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-13 03:13:08.054184 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-13 03:13:08.054202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-13 03:13:08.054221 | orchestrator | 2026-02-13 03:13:08.054239 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-13 03:13:08.054269 | orchestrator | Friday 13 February 2026 03:13:07 +0000 (0:00:01.430) 0:02:53.832 ******* 2026-02-13 03:13:08.054299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 
'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-13 03:13:15.967157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-13 03:13:15.967270 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:13:15.967288 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:13:15.967300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-13 03:13:15.967312 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:15.967324 | orchestrator | 2026-02-13 03:13:15.967337 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-13 03:13:15.967350 | orchestrator | Friday 13 February 2026 03:13:08 +0000 (0:00:00.388) 0:02:54.220 ******* 2026-02-13 03:13:15.967362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-13 03:13:15.967376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-13 03:13:15.967388 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:13:15.967399 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:13:15.967410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-13 03:13:15.967444 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:15.967456 | orchestrator | 2026-02-13 03:13:15.967509 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-13 03:13:15.967522 | orchestrator | Friday 13 February 2026 03:13:09 +0000 (0:00:00.823) 0:02:55.044 ******* 2026-02-13 03:13:15.967533 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:13:15.967544 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:13:15.967556 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:15.967567 | orchestrator | 2026-02-13 03:13:15.967579 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-13 03:13:15.967591 | orchestrator | Friday 13 February 2026 03:13:09 +0000 (0:00:00.430) 0:02:55.474 ******* 2026-02-13 03:13:15.967603 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:13:15.967614 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:13:15.967624 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:15.967661 | orchestrator | 2026-02-13 03:13:15.967672 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-13 03:13:15.967683 | orchestrator | Friday 13 February 2026 03:13:10 +0000 (0:00:01.180) 0:02:56.655 ******* 2026-02-13 03:13:15.967694 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:13:15.967706 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:13:15.967718 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:15.967729 | orchestrator | 2026-02-13 03:13:15.967740 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-13 03:13:15.967751 | orchestrator | Friday 13 February 2026 03:13:10 +0000 (0:00:00.311) 0:02:56.966 ******* 2026-02-13 03:13:15.967763 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:13:15.967774 | orchestrator | 2026-02-13 03:13:15.967785 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
2026-02-13 03:13:15.967796 | orchestrator | Friday 13 February 2026 03:13:12 +0000 (0:00:01.388) 0:02:58.355 ******* 2026-02-13 03:13:15.967832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:13:15.967854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:15.967868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:15.967894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:15.967907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-13 03:13:15.967930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.079617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-13 03:13:16.079814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-13 03:13:16.079835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.079868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:13:16.079882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:13:16.079895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.079926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.079946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-13 03:13:16.079967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.079979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-13 03:13:16.079990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.080001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.080020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-13 03:13:16.186617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-13 03:13:16.186835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.186854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:13:16.186867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-13 03:13:16.186880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-13 03:13:16.186893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': 
{'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.186934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:13:16.186955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:13:16.186967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.186979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.186991 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-13 03:13:16.187014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.448010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-13 03:13:16.448115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.448131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.448145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-13 03:13:16.448160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-13 03:13:16.448365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:13:16.448382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 
'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.448394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-13 03:13:16.448407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-13 03:13:16.448419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.448431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:13:16.448447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:16.448477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-13 03:13:17.478131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-13 03:13:17.478262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:17.478289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-13 03:13:17.478315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:13:17.478336 | orchestrator | 2026-02-13 03:13:17.478359 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-13 03:13:17.478414 | orchestrator | Friday 13 February 2026 03:13:16 +0000 (0:00:04.083) 0:03:02.439 ******* 2026-02-13 03:13:17.478457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 03:13:17.478508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:17.478533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:17.478559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:17.478584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-02-13 03:13:17.478665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-13 03:13:17.478707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 03:13:17.561167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-13 03:13:17.561263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.561279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-13 03:13:17.561291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.561339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.561351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.561381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:13:17.561393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.561404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-13 03:13:17.561423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-13 03:13:17.561439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.561450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-13 03:13:17.561468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-13 03:13:17.643234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.643324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-13 03:13:17.643337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-13 03:13:17.643368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.643390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:13:17.643416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:13:17.643425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:13:17.643433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.643447 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:13:17.643456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.643464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.643472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-13 03:13:17.643485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.871462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-13 03:13:17.871583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-02-13 03:13:17.871615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.871628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.871658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-13 03:13:17.871668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-13 03:13:17.871689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:13:17.871702 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:13:17.871710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-13 03:13:17.871717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.871728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:13:17.871735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:17.871746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-02-13 03:13:17.871765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-02-13 03:13:28.119795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-13 03:13:28.119924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-13 03:13:28.119955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:13:28.119966 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:13:28.119977 | orchestrator |
2026-02-13 03:13:28.119987 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] ***********************
2026-02-13 03:13:28.119997 | orchestrator | Friday 13 February 2026 03:13:17 +0000 (0:00:01.421) 0:03:03.861 *******
2026-02-13 03:13:28.120007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-02-13 03:13:28.120018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-02-13 03:13:28.120028 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:13:28.120037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-02-13 03:13:28.120045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-02-13 03:13:28.120054 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:13:28.120063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})
2026-02-13 03:13:28.120071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})
2026-02-13 03:13:28.120088 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:13:28.120097 | orchestrator |
2026-02-13 03:13:28.120106 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************
2026-02-13 03:13:28.120115 | orchestrator | Friday 13 February 2026 03:13:19 +0000 (0:00:01.891) 0:03:05.752 *******
2026-02-13 03:13:28.120124 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:13:28.120133 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:13:28.120159 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:13:28.120168 | orchestrator |
2026-02-13 03:13:28.120177 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************
2026-02-13 03:13:28.120186 | orchestrator | Friday 13 February 2026 03:13:21 +0000 (0:00:01.284) 0:03:07.037 *******
2026-02-13 03:13:28.120195 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:13:28.120204 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:13:28.120213 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:13:28.120222 | orchestrator |
2026-02-13 03:13:28.120230 | orchestrator | TASK [include_role : placement] ************************************************
2026-02-13 03:13:28.120239 | orchestrator | Friday 13 February 2026 03:13:23 +0000 (0:00:02.039) 0:03:09.076 *******
2026-02-13 03:13:28.120248 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:13:28.120256 | orchestrator |
2026-02-13 03:13:28.120265 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ******************
2026-02-13 03:13:28.120274 | orchestrator | Friday 13 February 2026 03:13:24 +0000 (0:00:01.170) 0:03:10.247 *******
2026-02-13 03:13:28.120284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:13:28.120300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:13:28.120310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:13:28.120325 | orchestrator |
2026-02-13 03:13:28.120335 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-02-13 03:13:28.120346 | orchestrator | Friday 13 February 2026 03:13:27 +0000 (0:00:03.340) 0:03:13.587 *******
2026-02-13 03:13:28.120364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:13:37.948410 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:13:37.948518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:13:37.948535 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:13:37.948563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:13:37.948574 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:13:37.948585 | orchestrator |
2026-02-13 03:13:37.948596 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-02-13 03:13:37.948607 | orchestrator | Friday 13 February 2026 03:13:28 +0000 (0:00:00.518) 0:03:14.106 *******
2026-02-13 03:13:37.948618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-13 03:13:37.948728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-13 03:13:37.948741 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:13:37.948751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-13 03:13:37.948762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-13 03:13:37.948771 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:13:37.948781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-13 03:13:37.948790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-13 03:13:37.948800 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:13:37.948810 | orchestrator |
2026-02-13 03:13:37.948820 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-02-13 03:13:37.948830 | orchestrator | Friday 13 February 2026 03:13:28 +0000 (0:00:00.779) 0:03:14.886 *******
2026-02-13 03:13:37.948840 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:13:37.948849 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:13:37.948859 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:13:37.948868 | orchestrator |
2026-02-13 03:13:37.948878 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-02-13 03:13:37.948888 | orchestrator | Friday 13 February 2026 03:13:30 +0000 (0:00:01.809) 0:03:16.696 *******
2026-02-13 03:13:37.948897 |
orchestrator | changed: [testbed-node-0] 2026-02-13 03:13:37.948907 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:13:37.948933 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:13:37.948944 | orchestrator | 2026-02-13 03:13:37.948956 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-13 03:13:37.948968 | orchestrator | Friday 13 February 2026 03:13:32 +0000 (0:00:01.790) 0:03:18.486 ******* 2026-02-13 03:13:37.948979 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:13:37.948990 | orchestrator | 2026-02-13 03:13:37.949001 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-13 03:13:37.949012 | orchestrator | Friday 13 February 2026 03:13:33 +0000 (0:00:01.491) 0:03:19.978 ******* 2026-02-13 03:13:37.949026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 03:13:37.949055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:13:37.949069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 03:13:37.949089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 03:13:39.100255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:13:39.100358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}})  2026-02-13 03:13:39.100410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 03:13:39.100424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:13:39.100434 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 03:13:39.100444 | orchestrator | 2026-02-13 03:13:39.100456 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-13 03:13:39.100467 | orchestrator | Friday 13 February 2026 03:13:37 +0000 (0:00:03.959) 0:03:23.937 ******* 2026-02-13 03:13:39.100496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 03:13:39.100514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:13:39.100529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 03:13:39.100539 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:13:39.100551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 03:13:39.100568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:13:49.364916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 03:13:49.365030 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:13:49.365067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 03:13:49.365108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 03:13:49.365121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 03:13:49.365132 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:49.365143 | orchestrator | 2026-02-13 03:13:49.365156 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-13 03:13:49.365168 | orchestrator | Friday 13 February 2026 03:13:39 +0000 (0:00:01.148) 0:03:25.086 ******* 2026-02-13 03:13:49.365181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365208 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365251 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:13:49.365262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365314 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:13:49.365325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-02-13 03:13:49.365374 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:13:49.365384 | orchestrator | 2026-02-13 03:13:49.365396 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-13 03:13:49.365407 | orchestrator | Friday 13 February 2026 03:13:39 +0000 (0:00:00.865) 0:03:25.952 ******* 2026-02-13 03:13:49.365418 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:13:49.365428 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:13:49.365440 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:13:49.365452 | orchestrator | 2026-02-13 03:13:49.365464 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-13 03:13:49.365477 | orchestrator | Friday 13 February 2026 03:13:41 +0000 (0:00:01.352) 0:03:27.304 ******* 2026-02-13 03:13:49.365489 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:13:49.365501 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:13:49.365513 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:13:49.365525 | orchestrator | 2026-02-13 03:13:49.365540 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-13 03:13:49.365559 | orchestrator | Friday 13 February 2026 03:13:43 +0000 (0:00:01.997) 0:03:29.301 ******* 2026-02-13 03:13:49.365577 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:13:49.365596 | orchestrator | 2026-02-13 03:13:49.365616 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-13 03:13:49.365668 | 
orchestrator | Friday 13 February 2026 03:13:44 +0000 (0:00:01.505) 0:03:30.806 ******* 2026-02-13 03:13:49.365683 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-13 03:13:49.365697 | orchestrator | 2026-02-13 03:13:49.365710 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-13 03:13:49.365722 | orchestrator | Friday 13 February 2026 03:13:45 +0000 (0:00:00.802) 0:03:31.609 ******* 2026-02-13 03:13:49.365736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-13 03:13:49.365768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-13 03:14:00.569525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-13 03:14:00.569668 | orchestrator | 2026-02-13 03:14:00.569684 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-13 03:14:00.569695 | orchestrator | Friday 13 February 2026 03:13:49 +0000 (0:00:03.744) 0:03:35.354 ******* 2026-02-13 03:14:00.569705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 03:14:00.569715 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:00.569739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 03:14:00.569748 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:00.569757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 03:14:00.569765 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:00.569773 | orchestrator | 2026-02-13 03:14:00.569782 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-13 03:14:00.569791 | orchestrator | Friday 13 February 2026 03:13:50 +0000 (0:00:01.301) 0:03:36.655 ******* 2026-02-13 03:14:00.569800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 03:14:00.569811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 03:14:00.569840 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:00.569849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 03:14:00.569857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 03:14:00.569866 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:00.569874 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 03:14:00.569882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 03:14:00.569904 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:00.569913 | orchestrator | 2026-02-13 03:14:00.569921 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-13 03:14:00.569929 | orchestrator | Friday 13 February 2026 03:13:52 +0000 (0:00:01.511) 0:03:38.167 ******* 2026-02-13 03:14:00.569937 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:14:00.569945 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:14:00.569953 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:14:00.569960 | orchestrator | 2026-02-13 03:14:00.569968 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-13 03:14:00.569976 | orchestrator | Friday 13 February 2026 03:13:54 +0000 (0:00:02.323) 0:03:40.490 ******* 2026-02-13 03:14:00.569984 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:14:00.569992 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:14:00.569999 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:14:00.570007 | orchestrator | 2026-02-13 03:14:00.570069 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-13 03:14:00.570082 | orchestrator | Friday 13 February 2026 03:13:57 +0000 (0:00:02.703) 0:03:43.193 ******* 2026-02-13 03:14:00.570092 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-13 03:14:00.570103 | orchestrator | 2026-02-13 03:14:00.570113 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-13 03:14:00.570122 | orchestrator | Friday 13 February 2026 03:13:58 +0000 (0:00:01.061) 0:03:44.254 ******* 2026-02-13 03:14:00.570137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 03:14:00.570148 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:00.570158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 03:14:00.570176 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:00.570185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 03:14:00.570195 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:00.570205 | orchestrator | 2026-02-13 03:14:00.570214 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-13 03:14:00.570224 | orchestrator | Friday 13 February 2026 03:13:59 +0000 (0:00:01.023) 0:03:45.277 ******* 2026-02-13 03:14:00.570233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 03:14:00.570242 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:00.570252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 03:14:00.570268 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:22.470850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 
'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 03:14:22.470960 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:22.470991 | orchestrator | 2026-02-13 03:14:22.471004 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-13 03:14:22.471017 | orchestrator | Friday 13 February 2026 03:14:00 +0000 (0:00:01.273) 0:03:46.551 ******* 2026-02-13 03:14:22.471029 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:22.471040 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:22.471051 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:22.471061 | orchestrator | 2026-02-13 03:14:22.471073 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-13 03:14:22.471084 | orchestrator | Friday 13 February 2026 03:14:02 +0000 (0:00:01.497) 0:03:48.048 ******* 2026-02-13 03:14:22.471094 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:14:22.471106 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:14:22.471117 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:14:22.471128 | orchestrator | 2026-02-13 03:14:22.471139 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-13 03:14:22.471150 | orchestrator | Friday 13 February 2026 03:14:04 +0000 (0:00:02.577) 0:03:50.626 ******* 2026-02-13 03:14:22.471180 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:14:22.471191 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:14:22.471202 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:14:22.471213 | 
orchestrator | 2026-02-13 03:14:22.471231 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-13 03:14:22.471243 | orchestrator | Friday 13 February 2026 03:14:07 +0000 (0:00:02.594) 0:03:53.221 ******* 2026-02-13 03:14:22.471254 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-13 03:14:22.471266 | orchestrator | 2026-02-13 03:14:22.471278 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-13 03:14:22.471288 | orchestrator | Friday 13 February 2026 03:14:08 +0000 (0:00:01.127) 0:03:54.348 ******* 2026-02-13 03:14:22.471300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 03:14:22.471311 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:22.471323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 03:14:22.471334 | orchestrator | skipping: 
[testbed-node-1] 2026-02-13 03:14:22.471345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 03:14:22.471357 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:22.471367 | orchestrator | 2026-02-13 03:14:22.471379 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-13 03:14:22.471392 | orchestrator | Friday 13 February 2026 03:14:09 +0000 (0:00:01.269) 0:03:55.617 ******* 2026-02-13 03:14:22.471422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 03:14:22.471435 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:22.471448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 03:14:22.471468 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:22.471482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 03:14:22.471494 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:22.471507 | orchestrator | 2026-02-13 03:14:22.471525 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-13 03:14:22.471538 | orchestrator | Friday 13 February 2026 03:14:10 +0000 (0:00:01.219) 0:03:56.836 ******* 2026-02-13 03:14:22.471550 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:22.471560 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:22.471571 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:22.471582 | orchestrator | 2026-02-13 03:14:22.471592 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-13 03:14:22.471603 | orchestrator | Friday 13 February 2026 03:14:12 +0000 (0:00:01.675) 0:03:58.512 ******* 2026-02-13 03:14:22.471638 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:14:22.471651 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:14:22.471662 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:14:22.471672 | orchestrator | 2026-02-13 03:14:22.471683 | orchestrator | TASK [proxysql-config : Copying over 
nova-cell ProxySQL rules config] ********** 2026-02-13 03:14:22.471694 | orchestrator | Friday 13 February 2026 03:14:14 +0000 (0:00:02.260) 0:04:00.773 ******* 2026-02-13 03:14:22.471705 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:14:22.471715 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:14:22.471726 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:14:22.471737 | orchestrator | 2026-02-13 03:14:22.471747 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-13 03:14:22.471758 | orchestrator | Friday 13 February 2026 03:14:17 +0000 (0:00:03.052) 0:04:03.825 ******* 2026-02-13 03:14:22.471769 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:14:22.471779 | orchestrator | 2026-02-13 03:14:22.471790 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-13 03:14:22.471801 | orchestrator | Friday 13 February 2026 03:14:19 +0000 (0:00:01.530) 0:04:05.355 ******* 2026-02-13 03:14:22.471814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}}}}) 2026-02-13 03:14:22.471826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 03:14:22.471851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.161089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.161239 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:14:23.161256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 03:14:23.161271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 03:14:23.161321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 03:14:23.161353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 03:14:23.161366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.161378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.161389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.161401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.161412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:14:23.161463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:14:23.161477 | orchestrator | 2026-02-13 03:14:23.161490 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-13 03:14:23.161502 | orchestrator | Friday 13 February 2026 03:14:22 +0000 (0:00:03.216) 0:04:08.572 ******* 2026-02-13 03:14:23.161524 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 03:14:23.294597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 03:14:23.294796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.294816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.294829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:14:23.294863 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:23.294878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 03:14:23.294891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 03:14:23.294932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.294945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.294957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 03:14:23.294975 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:23.294987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 03:14:23.294999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 03:14:23.295010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 03:14:23.295036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}}) 
2026-02-13 03:14:34.545056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-13 03:14:34.545206 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:14:34.545228 | orchestrator |
2026-02-13 03:14:34.545242 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-02-13 03:14:34.545255 | orchestrator | Friday 13 February 2026 03:14:23 +0000 (0:00:00.711) 0:04:09.284 *******
2026-02-13 03:14:34.545267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}) 
2026-02-13 03:14:34.545308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}) 
2026-02-13 03:14:34.545321 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:14:34.545333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}) 
2026-02-13 03:14:34.545344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}) 
2026-02-13 03:14:34.545355 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:14:34.545366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}) 
2026-02-13 03:14:34.545377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}) 
2026-02-13 03:14:34.545389 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:14:34.545409 | orchestrator |
2026-02-13 03:14:34.545428 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-02-13 03:14:34.545447 | orchestrator | Friday 13 February 2026 03:14:24 +0000 (0:00:00.860) 0:04:10.144 *******
2026-02-13 03:14:34.545466 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:14:34.545484 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:14:34.545504 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:14:34.545522 | orchestrator |
2026-02-13 03:14:34.545542 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] *********
2026-02-13 03:14:34.545553 | orchestrator | Friday 13 February 2026 03:14:25 +0000 (0:00:01.654) 0:04:11.799 *******
2026-02-13 03:14:34.545564 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:14:34.545575 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:14:34.545586 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:14:34.545598 | orchestrator |
2026-02-13 03:14:34.545657 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-02-13 03:14:34.545671 | orchestrator | Friday 13 February 2026 03:14:27 +0000
(0:00:01.975) 0:04:13.774 ******* 2026-02-13 03:14:34.545682 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:14:34.545694 | orchestrator | 2026-02-13 03:14:34.545705 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-13 03:14:34.545715 | orchestrator | Friday 13 February 2026 03:14:29 +0000 (0:00:01.363) 0:04:15.138 ******* 2026-02-13 03:14:34.545743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:14:34.545788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:14:34.545825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:14:34.545849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:14:34.545879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:14:34.545917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-02-13 03:14:36.476441 | orchestrator |
2026-02-13 03:14:36.476549 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-02-13 03:14:36.476565 | orchestrator | Friday 13 February 2026 03:14:34 +0000 (0:00:05.385) 0:04:20.523 *******
2026-02-13 03:14:36.476580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 
2026-02-13 03:14:36.476598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-13 03:14:36.476696 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:36.476731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-13 03:14:36.476745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-13 03:14:36.476797 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:36.476810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-13 03:14:36.476823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-13 03:14:36.476834 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:36.476846 | orchestrator | 2026-02-13 03:14:36.476857 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-13 03:14:36.476869 | orchestrator | Friday 13 February 2026 03:14:35 +0000 (0:00:01.012) 0:04:21.536 ******* 2026-02-13 03:14:36.476881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-13 03:14:36.476894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-13 03:14:36.476908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}) 
2026-02-13 03:14:36.476928 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:14:36.476946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}) 
2026-02-13 03:14:36.476960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}) 
2026-02-13 03:14:36.476973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}) 
2026-02-13 03:14:36.476986 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:14:36.476998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}) 
2026-02-13 03:14:36.477012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}) 
2026-02-13 03:14:36.477038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}) 
2026-02-13 03:14:42.310475 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:14:42.310588 | orchestrator |
2026-02-13 03:14:42.310701 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-02-13 03:14:42.310727 | orchestrator | Friday 13 February 2026 03:14:36 +0000 (0:00:00.927) 0:04:22.463 *******
2026-02-13 03:14:42.310746 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:14:42.310762 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:14:42.310776 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:14:42.310786 | orchestrator |
2026-02-13 03:14:42.310797 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-02-13 03:14:42.310807 | orchestrator | Friday 13 February 2026 03:14:36 +0000 (0:00:00.403) 0:04:22.867 *******
2026-02-13 03:14:42.310816 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:14:42.310826 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:14:42.310835 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:14:42.310845 | orchestrator |
2026-02-13 03:14:42.310855 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-02-13 03:14:42.310864 | orchestrator | Friday 13 February 2026 03:14:38 +0000 (0:00:01.400) 0:04:24.267 *******
2026-02-13 03:14:42.310874 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:14:42.310884 | orchestrator |
2026-02-13 03:14:42.310894 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-02-13 03:14:42.310903 | orchestrator | Friday 13 February 2026 03:14:39 +0000 (0:00:01.656) 0:04:25.924 *******
2026-02-13 03:14:42.310916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-13 03:14:42.310956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 03:14:42.311012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:42.311033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:42.311051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 03:14:42.311096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-13 03:14:42.311118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 03:14:42.311135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:42.311163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:42.311176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 03:14:42.311194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-13 03:14:42.311207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 03:14:42.311228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:43.835035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:43.835142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 03:14:43.835188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-13 03:14:43.835220 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-13 03:14:43.835234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:43.835246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:43.835277 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 03:14:43.835290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-13 03:14:43.835313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-13 03:14:43.835330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:43.835341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:43.835354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 03:14:43.835374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-13 03:14:44.514936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 
'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-13 03:14:44.515038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:44.515070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:44.515083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 03:14:44.515095 | orchestrator | 2026-02-13 03:14:44.515109 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-13 
03:14:44.515121 | orchestrator | Friday 13 February 2026 03:14:43 +0000 (0:00:04.047) 0:04:29.971 ******* 2026-02-13 03:14:44.515133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-13 03:14:44.515147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 03:14:44.515199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:44.515212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:44.515224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 03:14:44.515244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-13 03:14:44.515258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-13 03:14:44.515270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:44.515300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:44.659212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-13 03:14:44.659310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 03:14:44.659338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 03:14:44.659349 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:44.659360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:44.659370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:44.659380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  
2026-02-13 03:14:44.659427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-13 03:14:44.659440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-13 03:14:44.659453 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:44.659463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-13 03:14:44.659472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:44.659487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 03:14:44.659502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 03:14:46.119235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:46.119349 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:46.119375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:46.119418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 03:14:46.119442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-13 03:14:46.119464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 
'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-13 03:14:46.119514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:46.119561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 03:14:46.119583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 03:14:46.119603 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:46.119674 | orchestrator | 2026-02-13 03:14:46.119696 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-13 03:14:46.119715 | orchestrator | Friday 13 February 2026 03:14:44 +0000 (0:00:00.816) 0:04:30.787 ******* 2026-02-13 03:14:46.119746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-13 03:14:46.119769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-13 03:14:46.119790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-13 03:14:46.119806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-13 03:14:46.119821 | orchestrator | skipping: 
[testbed-node-0] 2026-02-13 03:14:46.119835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-13 03:14:46.119858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-13 03:14:46.119872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-13 03:14:46.119886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-13 03:14:46.119898 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:46.119911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-13 03:14:46.119924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-13 03:14:46.119938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-13 03:14:46.119961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-13 03:14:53.468937 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:53.469055 | orchestrator | 2026-02-13 03:14:53.469074 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-13 03:14:53.469087 | orchestrator | Friday 13 February 2026 03:14:46 +0000 (0:00:01.316) 0:04:32.103 ******* 2026-02-13 03:14:53.469098 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:53.469109 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:53.469120 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:53.469131 | orchestrator | 2026-02-13 03:14:53.469142 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-13 03:14:53.469153 | orchestrator | Friday 13 February 2026 03:14:46 +0000 (0:00:00.427) 0:04:32.530 ******* 2026-02-13 03:14:53.469164 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:53.469175 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:53.469185 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:53.469196 | orchestrator | 2026-02-13 03:14:53.469207 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-13 03:14:53.469218 | orchestrator | Friday 13 February 2026 03:14:47 +0000 (0:00:01.244) 0:04:33.774 ******* 2026-02-13 03:14:53.469229 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:14:53.469239 | orchestrator | 2026-02-13 03:14:53.469250 | orchestrator | 
TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-13 03:14:53.469261 | orchestrator | Friday 13 February 2026 03:14:49 +0000 (0:00:01.763) 0:04:35.538 ******* 2026-02-13 03:14:53.469293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 03:14:53.469349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 03:14:53.469406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 03:14:53.469420 | orchestrator | 2026-02-13 03:14:53.469431 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-13 03:14:53.469463 | orchestrator | Friday 13 February 2026 03:14:51 +0000 (0:00:02.105) 0:04:37.643 ******* 2026-02-13 03:14:53.469479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 03:14:53.469509 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:53.469523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 03:14:53.469536 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:53.469548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 03:14:53.469561 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:14:53.469574 | orchestrator | 2026-02-13 03:14:53.469587 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-13 03:14:53.469600 | orchestrator | Friday 13 February 2026 03:14:52 +0000 (0:00:00.480) 0:04:38.124 ******* 2026-02-13 03:14:53.469645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-13 03:14:53.469659 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:14:53.469671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-13 03:14:53.469682 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:14:53.469693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-13 03:14:53.469704 | orchestrator | skipping: 
[testbed-node-2] 2026-02-13 03:14:53.469714 | orchestrator | 2026-02-13 03:14:53.469725 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-13 03:14:53.469736 | orchestrator | Friday 13 February 2026 03:14:53 +0000 (0:00:00.890) 0:04:39.015 ******* 2026-02-13 03:14:53.469754 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:15:03.154401 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:15:03.154514 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:15:03.154529 | orchestrator | 2026-02-13 03:15:03.154544 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-13 03:15:03.154556 | orchestrator | Friday 13 February 2026 03:14:53 +0000 (0:00:00.446) 0:04:39.462 ******* 2026-02-13 03:15:03.154568 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:15:03.154602 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:15:03.154651 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:15:03.154662 | orchestrator | 2026-02-13 03:15:03.154674 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-13 03:15:03.154685 | orchestrator | Friday 13 February 2026 03:14:54 +0000 (0:00:01.334) 0:04:40.796 ******* 2026-02-13 03:15:03.154696 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:15:03.154708 | orchestrator | 2026-02-13 03:15:03.154719 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-13 03:15:03.154730 | orchestrator | Friday 13 February 2026 03:14:56 +0000 (0:00:01.430) 0:04:42.227 ******* 2026-02-13 03:15:03.154759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 03:15:03.154776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 03:15:03.154788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 03:15:03.154820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 03:15:03.154850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 03:15:03.154862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 03:15:03.154873 | orchestrator | 2026-02-13 03:15:03.154885 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-13 03:15:03.154897 | orchestrator | Friday 13 February 2026 03:15:02 +0000 (0:00:06.244) 0:04:48.472 ******* 2026-02-13 03:15:03.154908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-13 03:15:03.154928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-13 03:15:08.888882 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:15:08.889013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-13 03:15:08.889033 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-13 03:15:08.889046 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:15:08.889058 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-13 03:15:08.889070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-13 03:15:08.889104 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:15:08.889116 | 
orchestrator | 2026-02-13 03:15:08.889129 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-13 03:15:08.889142 | orchestrator | Friday 13 February 2026 03:15:03 +0000 (0:00:00.671) 0:04:49.143 ******* 2026-02-13 03:15:08.889170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889197 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889225 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:15:08.889236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889280 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:15:08.889291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-13 03:15:08.889335 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:15:08.889346 | orchestrator | 2026-02-13 03:15:08.889366 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-13 03:15:08.889376 | orchestrator | Friday 13 February 2026 03:15:04 +0000 (0:00:00.905) 0:04:50.049 ******* 2026-02-13 03:15:08.889388 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:15:08.889399 | orchestrator | changed: 
[testbed-node-1] 2026-02-13 03:15:08.889409 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:15:08.889420 | orchestrator | 2026-02-13 03:15:08.889431 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-13 03:15:08.889442 | orchestrator | Friday 13 February 2026 03:15:05 +0000 (0:00:01.299) 0:04:51.349 ******* 2026-02-13 03:15:08.889452 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:15:08.889463 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:15:08.889474 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:15:08.889484 | orchestrator | 2026-02-13 03:15:08.889496 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-13 03:15:08.889507 | orchestrator | Friday 13 February 2026 03:15:07 +0000 (0:00:02.262) 0:04:53.611 ******* 2026-02-13 03:15:08.889518 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:15:08.889529 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:15:08.889539 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:15:08.889550 | orchestrator | 2026-02-13 03:15:08.889561 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-13 03:15:08.889571 | orchestrator | Friday 13 February 2026 03:15:08 +0000 (0:00:00.638) 0:04:54.250 ******* 2026-02-13 03:15:08.889582 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:15:08.889592 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:15:08.889625 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:15:08.889636 | orchestrator | 2026-02-13 03:15:08.889648 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-13 03:15:08.889658 | orchestrator | Friday 13 February 2026 03:15:08 +0000 (0:00:00.315) 0:04:54.566 ******* 2026-02-13 03:15:08.889669 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:15:08.889687 | orchestrator | skipping: 
[testbed-node-1] 2026-02-13 03:15:52.235508 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:15:52.235665 | orchestrator | 2026-02-13 03:15:52.235680 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-13 03:15:52.235689 | orchestrator | Friday 13 February 2026 03:15:08 +0000 (0:00:00.314) 0:04:54.881 ******* 2026-02-13 03:15:52.235698 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:15:52.235705 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:15:52.235713 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:15:52.235720 | orchestrator | 2026-02-13 03:15:52.235728 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-13 03:15:52.235736 | orchestrator | Friday 13 February 2026 03:15:09 +0000 (0:00:00.302) 0:04:55.183 ******* 2026-02-13 03:15:52.235743 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:15:52.235750 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:15:52.235757 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:15:52.235764 | orchestrator | 2026-02-13 03:15:52.235772 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-13 03:15:52.235794 | orchestrator | Friday 13 February 2026 03:15:09 +0000 (0:00:00.609) 0:04:55.792 ******* 2026-02-13 03:15:52.235803 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:15:52.235811 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:15:52.235818 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:15:52.235825 | orchestrator | 2026-02-13 03:15:52.235833 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-13 03:15:52.235840 | orchestrator | Friday 13 February 2026 03:15:10 +0000 (0:00:00.535) 0:04:56.328 ******* 2026-02-13 03:15:52.235847 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:15:52.235855 | orchestrator | ok: 
[testbed-node-1] 2026-02-13 03:15:52.235863 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:15:52.235870 | orchestrator | 2026-02-13 03:15:52.235877 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-13 03:15:52.235904 | orchestrator | Friday 13 February 2026 03:15:10 +0000 (0:00:00.649) 0:04:56.977 ******* 2026-02-13 03:15:52.235911 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:15:52.235918 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:15:52.235925 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:15:52.235932 | orchestrator | 2026-02-13 03:15:52.235939 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-13 03:15:52.235946 | orchestrator | Friday 13 February 2026 03:15:11 +0000 (0:00:00.626) 0:04:57.603 ******* 2026-02-13 03:15:52.235953 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:15:52.235960 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:15:52.235967 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:15:52.235974 | orchestrator | 2026-02-13 03:15:52.235981 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-13 03:15:52.235989 | orchestrator | Friday 13 February 2026 03:15:12 +0000 (0:00:00.836) 0:04:58.440 ******* 2026-02-13 03:15:52.235996 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:15:52.236003 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:15:52.236010 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:15:52.236017 | orchestrator | 2026-02-13 03:15:52.236024 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-13 03:15:52.236031 | orchestrator | Friday 13 February 2026 03:15:13 +0000 (0:00:00.859) 0:04:59.299 ******* 2026-02-13 03:15:52.236038 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:15:52.236045 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:15:52.236052 | orchestrator | ok: 
[testbed-node-2]
2026-02-13 03:15:52.236059 | orchestrator |
2026-02-13 03:15:52.236068 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-02-13 03:15:52.236076 | orchestrator | Friday 13 February 2026 03:15:14 +0000 (0:00:00.882) 0:05:00.182 *******
2026-02-13 03:15:52.236085 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:15:52.236093 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:15:52.236102 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:15:52.236110 | orchestrator |
2026-02-13 03:15:52.236119 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-13 03:15:52.236127 | orchestrator | Friday 13 February 2026 03:15:22 +0000 (0:00:08.052) 0:05:08.234 *******
2026-02-13 03:15:52.236135 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:15:52.236144 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:15:52.236152 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:15:52.236231 | orchestrator |
2026-02-13 03:15:52.236240 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-13 03:15:52.236248 | orchestrator | Friday 13 February 2026 03:15:23 +0000 (0:00:01.134) 0:05:09.369 *******
2026-02-13 03:15:52.236256 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:15:52.236265 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:15:52.236273 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:15:52.236281 | orchestrator |
2026-02-13 03:15:52.236290 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-13 03:15:52.236298 | orchestrator | Friday 13 February 2026 03:15:33 +0000 (0:00:10.609) 0:05:19.979 *******
2026-02-13 03:15:52.236307 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:15:52.236315 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:15:52.236323 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:15:52.236331 | orchestrator |
2026-02-13 03:15:52.236339 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-13 03:15:52.236347 | orchestrator | Friday 13 February 2026 03:15:38 +0000 (0:00:04.718) 0:05:24.697 *******
2026-02-13 03:15:52.236356 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:15:52.236364 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:15:52.236372 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:15:52.236381 | orchestrator |
2026-02-13 03:15:52.236389 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-13 03:15:52.236397 | orchestrator | Friday 13 February 2026 03:15:43 +0000 (0:00:00.696) 0:05:29.110 *******
2026-02-13 03:15:52.236415 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:15:52.236423 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:15:52.236431 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:15:52.236438 | orchestrator |
2026-02-13 03:15:52.236445 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-13 03:15:52.236452 | orchestrator | Friday 13 February 2026 03:15:43 +0000 (0:00:00.383) 0:05:29.806 *******
2026-02-13 03:15:52.236460 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:15:52.236467 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:15:52.236474 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:15:52.236481 | orchestrator |
2026-02-13 03:15:52.236504 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-13 03:15:52.236512 | orchestrator | Friday 13 February 2026 03:15:44 +0000 (0:00:00.346) 0:05:30.190 *******
2026-02-13 03:15:52.236520 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:15:52.236527 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:15:52.236534 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:15:52.236542 | orchestrator |
2026-02-13 03:15:52.236549 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-13 03:15:52.236556 | orchestrator | Friday 13 February 2026 03:15:44 +0000 (0:00:00.346) 0:05:30.537 *******
2026-02-13 03:15:52.236564 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:15:52.236571 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:15:52.236578 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:15:52.236585 | orchestrator |
2026-02-13 03:15:52.236593 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-13 03:15:52.236622 | orchestrator | Friday 13 February 2026 03:15:44 +0000 (0:00:00.356) 0:05:30.893 *******
2026-02-13 03:15:52.236630 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:15:52.236643 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:15:52.236650 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:15:52.236657 | orchestrator |
2026-02-13 03:15:52.236665 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-13 03:15:52.236672 | orchestrator | Friday 13 February 2026 03:15:45 +0000 (0:00:00.675) 0:05:31.569 *******
2026-02-13 03:15:52.236679 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:15:52.236686 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:15:52.236693 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:15:52.236701 | orchestrator |
2026-02-13 03:15:52.236708 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-13 03:15:52.236715 | orchestrator | Friday 13 February 2026 03:15:45 +0000 (0:00:00.361) 0:05:31.931 *******
2026-02-13 03:15:52.236722 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:15:52.236729 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:15:52.236737 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:15:52.236744 | orchestrator |
2026-02-13 03:15:52.236751 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-13 03:15:52.236758 | orchestrator | Friday 13 February 2026 03:15:50 +0000 (0:00:04.691) 0:05:36.622 *******
2026-02-13 03:15:52.236765 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:15:52.236773 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:15:52.236780 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:15:52.236787 | orchestrator |
2026-02-13 03:15:52.236794 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:15:52.236802 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-13 03:15:52.236811 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-13 03:15:52.236818 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-13 03:15:52.236825 | orchestrator |
2026-02-13 03:15:52.236838 | orchestrator |
2026-02-13 03:15:52.236845 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:15:52.236853 | orchestrator | Friday 13 February 2026 03:15:51 +0000 (0:00:00.800) 0:05:37.422 *******
2026-02-13 03:15:52.236860 | orchestrator | ===============================================================================
2026-02-13 03:15:52.236867 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 10.61s
2026-02-13 03:15:52.236874 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.05s
2026-02-13 03:15:52.236882 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.24s
2026-02-13 03:15:52.236889 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.39s
2026-02-13 03:15:52.236896 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.72s
2026-02-13 03:15:52.236903 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.69s
2026-02-13 03:15:52.236910 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.41s
2026-02-13 03:15:52.236919 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.08s
2026-02-13 03:15:52.236930 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.05s
2026-02-13 03:15:52.236942 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 3.96s
2026-02-13 03:15:52.236954 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.83s
2026-02-13 03:15:52.236967 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.74s
2026-02-13 03:15:52.236979 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 3.47s
2026-02-13 03:15:52.236990 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.46s
2026-02-13 03:15:52.237002 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.39s
2026-02-13 03:15:52.237014 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 3.36s
2026-02-13 03:15:52.237021 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.34s
2026-02-13 03:15:52.237029 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.34s
2026-02-13 03:15:52.237036 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.28s
2026-02-13 03:15:52.237043 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.22s
2026-02-13 03:15:54.533359 | orchestrator | 2026-02-13 03:15:54 | INFO  | Task 4decc79b-7bf0-48d0-a679-48be7d126a21 (opensearch) was prepared for execution.
2026-02-13 03:15:54.533472 | orchestrator | 2026-02-13 03:15:54 | INFO  | It takes a moment until task 4decc79b-7bf0-48d0-a679-48be7d126a21 (opensearch) has been started and output is visible here.
2026-02-13 03:16:04.757521 | orchestrator |
2026-02-13 03:16:04.757749 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 03:16:04.757780 | orchestrator |
2026-02-13 03:16:04.757794 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 03:16:04.757806 | orchestrator | Friday 13 February 2026 03:15:58 +0000 (0:00:00.245) 0:00:00.245 *******
2026-02-13 03:16:04.757817 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:16:04.757830 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:16:04.757841 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:16:04.757852 | orchestrator |
2026-02-13 03:16:04.757863 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 03:16:04.757874 | orchestrator | Friday 13 February 2026 03:15:58 +0000 (0:00:00.277) 0:00:00.523 *******
2026-02-13 03:16:04.757904 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-13 03:16:04.757916 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-13 03:16:04.757927 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-13 03:16:04.757938 | orchestrator |
2026-02-13 03:16:04.757949 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-13 03:16:04.757984 | orchestrator |
2026-02-13 03:16:04.757996 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-13 03:16:04.758006 | orchestrator | Friday 13 February 2026 03:15:59 +0000 (0:00:00.383) 0:00:00.906 *******
2026-02-13 03:16:04.758069 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:16:04.758085 | orchestrator |
2026-02-13 03:16:04.758098 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2026-02-13 03:16:04.758111 | orchestrator | Friday 13 February 2026 03:15:59 +0000 (0:00:00.484) 0:00:01.391 *******
2026-02-13 03:16:04.758160 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-13 03:16:04.758174 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-13 03:16:04.758188 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-13 03:16:04.758202 | orchestrator |
2026-02-13 03:16:04.758215 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2026-02-13 03:16:04.758228 | orchestrator | Friday 13 February 2026 03:16:00 +0000 (0:00:00.657) 0:00:02.048 *******
2026-02-13 03:16:04.758244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13
03:16:04.758262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:16:04.758297 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:16:04.758321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:16:04.758347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 
03:16:04.758362 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:16:04.758376 | orchestrator | 2026-02-13 03:16:04.758389 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-13 03:16:04.758403 | orchestrator | Friday 13 February 2026 03:16:01 +0000 (0:00:01.563) 0:00:03.612 ******* 2026-02-13 03:16:04.758414 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:16:04.758425 | orchestrator | 2026-02-13 03:16:04.758436 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-13 03:16:04.758447 | orchestrator | Friday 13 February 2026 03:16:02 +0000 (0:00:00.511) 0:00:04.123 ******* 2026-02-13 03:16:04.758473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:16:05.537650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:16:05.537784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:16:05.537817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:16:05.537843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:16:05.537969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:16:05.537986 | orchestrator | 2026-02-13 03:16:05.538000 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS 
certificate] *** 2026-02-13 03:16:05.538012 | orchestrator | Friday 13 February 2026 03:16:04 +0000 (0:00:02.351) 0:00:06.475 ******* 2026-02-13 03:16:05.538084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-13 03:16:05.538098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-13 03:16:05.538110 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:16:05.538123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-13 03:16:05.538160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-13 03:16:06.614127 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:16:06.614230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-13 03:16:06.614251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-13 03:16:06.614266 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:16:06.614278 | orchestrator | 2026-02-13 03:16:06.614292 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-13 03:16:06.614313 | orchestrator | Friday 13 February 2026 03:16:05 +0000 (0:00:00.777) 0:00:07.253 ******* 2026-02-13 03:16:06.614361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-13 03:16:06.614436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-13 03:16:06.614484 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:16:06.614505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-13 03:16:06.614526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-13 03:16:06.614547 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:16:06.614579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-13 03:16:06.614834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-13 03:16:06.614880 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:16:06.614893 | orchestrator | 2026-02-13 03:16:06.614904 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-13 03:16:06.614930 | orchestrator | Friday 13 February 2026 03:16:06 +0000 (0:00:01.071) 0:00:08.324 ******* 2026-02-13 03:16:14.668110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:16:14.668222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:16:14.668239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:16:14.668290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:16:14.668324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:16:14.668338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:16:14.668358 | orchestrator | 2026-02-13 03:16:14.668372 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-13 03:16:14.668384 | orchestrator | Friday 13 February 2026 03:16:08 +0000 (0:00:02.382) 0:00:10.707 ******* 2026-02-13 03:16:14.668396 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:16:14.668408 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:16:14.668420 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:16:14.668431 | orchestrator | 2026-02-13 03:16:14.668442 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-13 03:16:14.668453 | orchestrator | Friday 13 February 2026 03:16:11 +0000 (0:00:02.263) 0:00:12.970 ******* 2026-02-13 03:16:14.668464 | orchestrator | changed: 
[testbed-node-0] 2026-02-13 03:16:14.668478 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:16:14.668496 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:16:14.668514 | orchestrator | 2026-02-13 03:16:14.668531 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-13 03:16:14.668549 | orchestrator | Friday 13 February 2026 03:16:12 +0000 (0:00:01.760) 0:00:14.730 ******* 2026-02-13 03:16:14.668570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:16:14.668625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:16:14.668655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-13 03:18:58.488037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:18:58.488201 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:18:58.488237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-13 03:18:58.488251 | orchestrator | 2026-02-13 03:18:58.488265 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-13 03:18:58.488277 | orchestrator | Friday 13 February 2026 03:16:14 +0000 (0:00:01.652) 0:00:16.382 ******* 2026-02-13 03:18:58.488288 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:18:58.488300 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:18:58.488310 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:18:58.488321 | orchestrator | 2026-02-13 03:18:58.488333 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-13 03:18:58.488344 | orchestrator | Friday 13 February 2026 03:16:14 +0000 (0:00:00.268) 0:00:16.651 ******* 2026-02-13 03:18:58.488355 | orchestrator | 2026-02-13 03:18:58.488366 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-13 03:18:58.488377 | orchestrator | Friday 13 February 2026 03:16:14 +0000 (0:00:00.063) 0:00:16.715 ******* 2026-02-13 03:18:58.488387 | orchestrator | 2026-02-13 03:18:58.488397 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-13 03:18:58.488416 | orchestrator | Friday 13 February 2026 03:16:15 +0000 (0:00:00.063) 0:00:16.778 ******* 2026-02-13 03:18:58.488427 | orchestrator | 2026-02-13 03:18:58.488438 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-13 03:18:58.488466 
| orchestrator | Friday 13 February 2026 03:16:15 +0000 (0:00:00.065) 0:00:16.843 ******* 2026-02-13 03:18:58.488477 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:18:58.488488 | orchestrator | 2026-02-13 03:18:58.488499 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-13 03:18:58.488510 | orchestrator | Friday 13 February 2026 03:16:15 +0000 (0:00:00.197) 0:00:17.041 ******* 2026-02-13 03:18:58.488521 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:18:58.488531 | orchestrator | 2026-02-13 03:18:58.488542 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-13 03:18:58.488553 | orchestrator | Friday 13 February 2026 03:16:15 +0000 (0:00:00.591) 0:00:17.632 ******* 2026-02-13 03:18:58.488563 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:18:58.488576 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:18:58.488589 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:18:58.488601 | orchestrator | 2026-02-13 03:18:58.488614 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-13 03:18:58.488651 | orchestrator | Friday 13 February 2026 03:17:26 +0000 (0:01:11.012) 0:01:28.644 ******* 2026-02-13 03:18:58.488664 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:18:58.488677 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:18:58.488688 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:18:58.488700 | orchestrator | 2026-02-13 03:18:58.488712 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-13 03:18:58.488724 | orchestrator | Friday 13 February 2026 03:18:47 +0000 (0:01:21.007) 0:02:49.652 ******* 2026-02-13 03:18:58.488737 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:18:58.488749 | orchestrator | 
2026-02-13 03:18:58.488762 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-13 03:18:58.488774 | orchestrator | Friday 13 February 2026 03:18:48 +0000 (0:00:00.490) 0:02:50.142 ******* 2026-02-13 03:18:58.488786 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:18:58.488799 | orchestrator | 2026-02-13 03:18:58.488811 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-13 03:18:58.488823 | orchestrator | Friday 13 February 2026 03:18:50 +0000 (0:00:02.561) 0:02:52.703 ******* 2026-02-13 03:18:58.488835 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:18:58.488847 | orchestrator | 2026-02-13 03:18:58.488859 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-13 03:18:58.488871 | orchestrator | Friday 13 February 2026 03:18:53 +0000 (0:00:02.207) 0:02:54.911 ******* 2026-02-13 03:18:58.488884 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:18:58.488896 | orchestrator | 2026-02-13 03:18:58.488908 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-13 03:18:58.488921 | orchestrator | Friday 13 February 2026 03:18:55 +0000 (0:00:02.663) 0:02:57.575 ******* 2026-02-13 03:18:58.488933 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:18:58.488944 | orchestrator | 2026-02-13 03:18:58.488955 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 03:18:58.488966 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-13 03:18:58.488978 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 03:18:58.488996 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 03:18:58.489007 | orchestrator | 2026-02-13 
03:18:58.489018 | orchestrator | 2026-02-13 03:18:58.489036 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:18:58.489047 | orchestrator | Friday 13 February 2026 03:18:58 +0000 (0:00:02.609) 0:03:00.184 ******* 2026-02-13 03:18:58.489058 | orchestrator | =============================================================================== 2026-02-13 03:18:58.489068 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 81.01s 2026-02-13 03:18:58.489079 | orchestrator | opensearch : Restart opensearch container ------------------------------ 71.01s 2026-02-13 03:18:58.489089 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.66s 2026-02-13 03:18:58.489100 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.61s 2026-02-13 03:18:58.489110 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.56s 2026-02-13 03:18:58.489121 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.38s 2026-02-13 03:18:58.489131 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.35s 2026-02-13 03:18:58.489142 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.26s 2026-02-13 03:18:58.489152 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.21s 2026-02-13 03:18:58.489163 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.76s 2026-02-13 03:18:58.489173 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.65s 2026-02-13 03:18:58.489184 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.56s 2026-02-13 03:18:58.489194 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS 
key --- 1.07s 2026-02-13 03:18:58.489205 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.78s 2026-02-13 03:18:58.489215 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.66s 2026-02-13 03:18:58.489226 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.59s 2026-02-13 03:18:58.489243 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2026-02-13 03:18:58.792149 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.49s 2026-02-13 03:18:58.792272 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2026-02-13 03:18:58.792291 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.38s 2026-02-13 03:19:01.072994 | orchestrator | 2026-02-13 03:19:01 | INFO  | Task 8c928c25-de18-4307-95a9-ffed076296ff (memcached) was prepared for execution. 2026-02-13 03:19:01.073135 | orchestrator | 2026-02-13 03:19:01 | INFO  | It takes a moment until task 8c928c25-de18-4307-95a9-ffed076296ff (memcached) has been started and output is visible here. 
2026-02-13 03:19:17.585828 | orchestrator | 2026-02-13 03:19:17.585936 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 03:19:17.585953 | orchestrator | 2026-02-13 03:19:17.585967 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 03:19:17.585981 | orchestrator | Friday 13 February 2026 03:19:05 +0000 (0:00:00.252) 0:00:00.252 ******* 2026-02-13 03:19:17.585992 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:19:17.586004 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:19:17.586078 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:19:17.586092 | orchestrator | 2026-02-13 03:19:17.586104 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 03:19:17.586115 | orchestrator | Friday 13 February 2026 03:19:05 +0000 (0:00:00.301) 0:00:00.554 ******* 2026-02-13 03:19:17.586127 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-13 03:19:17.586139 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-13 03:19:17.586150 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-13 03:19:17.586161 | orchestrator | 2026-02-13 03:19:17.586172 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-13 03:19:17.586209 | orchestrator | 2026-02-13 03:19:17.586221 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-13 03:19:17.586232 | orchestrator | Friday 13 February 2026 03:19:05 +0000 (0:00:00.404) 0:00:00.958 ******* 2026-02-13 03:19:17.586243 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:19:17.586255 | orchestrator | 2026-02-13 03:19:17.586266 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-13 03:19:17.586277 | orchestrator | Friday 13 February 2026 03:19:06 +0000 (0:00:00.462) 0:00:01.420 ******* 2026-02-13 03:19:17.586289 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-13 03:19:17.586303 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-13 03:19:17.586315 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-13 03:19:17.586327 | orchestrator | 2026-02-13 03:19:17.586340 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-13 03:19:17.586353 | orchestrator | Friday 13 February 2026 03:19:07 +0000 (0:00:00.683) 0:00:02.104 ******* 2026-02-13 03:19:17.586365 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-13 03:19:17.586377 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-13 03:19:17.586390 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-13 03:19:17.586402 | orchestrator | 2026-02-13 03:19:17.586415 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-13 03:19:17.586428 | orchestrator | Friday 13 February 2026 03:19:08 +0000 (0:00:01.648) 0:00:03.753 ******* 2026-02-13 03:19:17.586455 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:19:17.586468 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:19:17.586481 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:19:17.586493 | orchestrator | 2026-02-13 03:19:17.586505 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-13 03:19:17.586518 | orchestrator | Friday 13 February 2026 03:19:10 +0000 (0:00:01.404) 0:00:05.158 ******* 2026-02-13 03:19:17.586530 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:19:17.586543 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:19:17.586555 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:19:17.586567 | orchestrator | 2026-02-13 
03:19:17.586580 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 03:19:17.586593 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 03:19:17.586607 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 03:19:17.586620 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 03:19:17.586656 | orchestrator | 2026-02-13 03:19:17.586668 | orchestrator | 2026-02-13 03:19:17.586679 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:19:17.586690 | orchestrator | Friday 13 February 2026 03:19:17 +0000 (0:00:07.037) 0:00:12.196 ******* 2026-02-13 03:19:17.586701 | orchestrator | =============================================================================== 2026-02-13 03:19:17.586712 | orchestrator | memcached : Restart memcached container --------------------------------- 7.04s 2026-02-13 03:19:17.586722 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.65s 2026-02-13 03:19:17.586733 | orchestrator | memcached : Check memcached container ----------------------------------- 1.40s 2026-02-13 03:19:17.586744 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.68s 2026-02-13 03:19:17.586755 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.46s 2026-02-13 03:19:17.586766 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s 2026-02-13 03:19:17.586777 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-02-13 03:19:19.932833 | orchestrator | 2026-02-13 03:19:19 | INFO  | Task f357a3dd-a7c2-4b96-a98f-6f121f42e4c3 (redis) was prepared for execution. 
2026-02-13 03:19:19.932939 | orchestrator | 2026-02-13 03:19:19 | INFO  | It takes a moment until task f357a3dd-a7c2-4b96-a98f-6f121f42e4c3 (redis) has been started and output is visible here.
2026-02-13 03:19:28.424568 | orchestrator |
2026-02-13 03:19:28.424770 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 03:19:28.424802 | orchestrator |
2026-02-13 03:19:28.424823 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 03:19:28.424843 | orchestrator | Friday 13 February 2026 03:19:23 +0000 (0:00:00.228) 0:00:00.228 *******
2026-02-13 03:19:28.424862 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:19:28.424882 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:19:28.424894 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:19:28.424905 | orchestrator |
2026-02-13 03:19:28.424916 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 03:19:28.424927 | orchestrator | Friday 13 February 2026 03:19:24 +0000 (0:00:00.273) 0:00:00.501 *******
2026-02-13 03:19:28.424938 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-13 03:19:28.424949 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-13 03:19:28.424960 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-13 03:19:28.424971 | orchestrator |
2026-02-13 03:19:28.424981 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-13 03:19:28.424992 | orchestrator |
2026-02-13 03:19:28.425003 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-13 03:19:28.425013 | orchestrator | Friday 13 February 2026 03:19:24 +0000 (0:00:00.322) 0:00:00.824 *******
2026-02-13 03:19:28.425024 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:19:28.425036 | orchestrator |
2026-02-13 03:19:28.425047 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-13 03:19:28.425057 | orchestrator | Friday 13 February 2026 03:19:24 +0000 (0:00:00.436) 0:00:01.261 *******
2026-02-13 03:19:28.425072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:28.425088 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:28.425101 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:28.425141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:28.425176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:28.425190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:28.425203 | orchestrator |
2026-02-13 03:19:28.425216 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-13 03:19:28.425229 | orchestrator | Friday 13 February 2026 03:19:26 +0000 (0:00:01.020) 0:00:02.282 *******
2026-02-13 03:19:28.425242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:28.425356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:28.425399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:28.425436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:28.425469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524202 | orchestrator |
2026-02-13 03:19:32.524232 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-13 03:19:32.524254 | orchestrator | Friday 13 February 2026 03:19:28 +0000 (0:00:02.395) 0:00:04.677 *******
2026-02-13 03:19:32.524275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524415 | orchestrator |
2026-02-13 03:19:32.524426 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-02-13 03:19:32.524437 | orchestrator | Friday 13 February 2026 03:19:30 +0000 (0:00:02.446) 0:00:07.123 *******
2026-02-13 03:19:32.524449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:32.524528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-13 03:19:42.471003 | orchestrator |
2026-02-13 03:19:42.471132 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-13 03:19:42.471149 | orchestrator | Friday 13 February 2026 03:19:32 +0000 (0:00:01.413) 0:00:08.536 *******
2026-02-13 03:19:42.471159 | orchestrator |
2026-02-13 03:19:42.471169 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-13 03:19:42.471179 | orchestrator | Friday 13 February 2026 03:19:32 +0000 (0:00:00.082) 0:00:08.619 *******
2026-02-13 03:19:42.471189 | orchestrator |
2026-02-13 03:19:42.471199 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-13 03:19:42.471208 | orchestrator | Friday 13 February 2026 03:19:32 +0000 (0:00:00.080) 0:00:08.700 *******
2026-02-13 03:19:42.471218 | orchestrator |
2026-02-13 03:19:42.471227 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-13 03:19:42.471237 | orchestrator | Friday 13 February 2026 03:19:32 +0000 (0:00:00.067) 0:00:08.768 *******
2026-02-13 03:19:42.471247 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:19:42.471258 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:19:42.471267 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:19:42.471277 | orchestrator |
2026-02-13 03:19:42.471287 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-13 03:19:42.471296 | orchestrator | Friday 13 February 2026 03:19:39 +0000 (0:00:06.700) 0:00:15.468 *******
2026-02-13 03:19:42.471330 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:19:42.471340 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:19:42.471350 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:19:42.471360 | orchestrator |
2026-02-13 03:19:42.471370 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:19:42.471380 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:19:42.471391 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:19:42.471414 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:19:42.471424 | orchestrator |
2026-02-13 03:19:42.471434 | orchestrator |
2026-02-13 03:19:42.471444 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:19:42.471453 | orchestrator | Friday 13 February 2026 03:19:42 +0000 (0:00:02.946) 0:00:18.415 *******
2026-02-13 03:19:42.471463 | orchestrator | ===============================================================================
2026-02-13 03:19:42.471472 | orchestrator | redis : Restart redis container ----------------------------------------- 6.70s
2026-02-13 03:19:42.471482 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 2.95s
2026-02-13 03:19:42.471491 | orchestrator | redis : Copying over redis config files --------------------------------- 2.45s
2026-02-13 03:19:42.471501 | orchestrator | redis : Copying over default config.json files -------------------------- 2.40s
2026-02-13 03:19:42.471510 | orchestrator | redis : Check redis containers ------------------------------------------ 1.41s
2026-02-13 03:19:42.471520 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.02s
2026-02-13 03:19:42.471529 | orchestrator | redis : include_tasks --------------------------------------------------- 0.44s
2026-02-13 03:19:42.471539 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.32s
2026-02-13 03:19:42.471548 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s
2026-02-13 03:19:42.471558 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.23s
2026-02-13 03:19:44.846141 | orchestrator | 2026-02-13 03:19:44 | INFO  | Task a8b55861-572b-40fc-8da3-26eec52fafc0 (mariadb) was prepared for execution.
2026-02-13 03:19:44.846246 | orchestrator | 2026-02-13 03:19:44 | INFO  | It takes a moment until task a8b55861-572b-40fc-8da3-26eec52fafc0 (mariadb) has been started and output is visible here.
2026-02-13 03:19:57.212586 | orchestrator |
2026-02-13 03:19:57.212769 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 03:19:57.212793 | orchestrator |
2026-02-13 03:19:57.212806 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 03:19:57.212820 | orchestrator | Friday 13 February 2026 03:19:48 +0000 (0:00:00.124) 0:00:00.124 *******
2026-02-13 03:19:57.212833 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:19:57.212845 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:19:57.212853 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:19:57.212860 | orchestrator |
2026-02-13 03:19:57.212867 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 03:19:57.212876 | orchestrator | Friday 13 February 2026 03:19:49 +0000 (0:00:00.218) 0:00:00.342 *******
2026-02-13 03:19:57.212884 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-13 03:19:57.212892 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-13 03:19:57.212899 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-13 03:19:57.212906 | orchestrator |
2026-02-13 03:19:57.212914 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-13 03:19:57.212921 | orchestrator |
2026-02-13 03:19:57.212928 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-13 03:19:57.212956 | orchestrator | Friday 13 February 2026 03:19:49 +0000 (0:00:00.397) 0:00:00.740 *******
2026-02-13 03:19:57.212963 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 03:19:57.212971 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 03:19:57.212978 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 03:19:57.212985 | orchestrator |
2026-02-13 03:19:57.212993 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-13 03:19:57.213000 | orchestrator | Friday 13 February 2026 03:19:49 +0000 (0:00:00.322) 0:00:01.062 *******
2026-02-13 03:19:57.213008 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:19:57.213017 | orchestrator |
2026-02-13 03:19:57.213024 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-02-13 03:19:57.213031 | orchestrator | Friday 13 February 2026 03:19:50 +0000 (0:00:00.444) 0:00:01.507 *******
2026-02-13 03:19:57.213057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-13 03:19:57.213088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-13 03:19:57.213108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-13 03:19:57.213117 | orchestrator |
2026-02-13 03:19:57.213125 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-02-13 03:19:57.213133 | orchestrator | Friday 13 February 2026 03:19:52 +0000 (0:00:02.221) 0:00:03.729 *******
2026-02-13 03:19:57.213141 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:19:57.213151 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:19:57.213159 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:19:57.213167 | orchestrator |
2026-02-13 03:19:57.213176 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-13 03:19:57.213184 | orchestrator | Friday 13 February 2026 03:19:52 +0000 (0:00:00.498) 0:00:04.227 *******
2026-02-13 03:19:57.213192 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:19:57.213201 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:19:57.213209 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:19:57.213217 | orchestrator |
2026-02-13 03:19:57.213226 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-13 03:19:57.213234 | orchestrator | Friday 13 February 2026 03:19:54 +0000 (0:00:01.336) 0:00:05.564 *******
2026-02-13 03:19:57.213250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-13 03:20:04.988938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-13 03:20:04.989052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor',
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 03:20:04.989092 | orchestrator | 2026-02-13 03:20:04.989106 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-13 03:20:04.989119 | orchestrator | Friday 13 February 2026 03:19:57 +0000 (0:00:02.917) 0:00:08.481 ******* 2026-02-13 03:20:04.989130 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:20:04.989144 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:20:04.989154 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:20:04.989165 | orchestrator | 2026-02-13 03:20:04.989176 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-13 03:20:04.989205 | orchestrator | Friday 13 February 2026 03:19:58 +0000 (0:00:01.055) 0:00:09.537 ******* 2026-02-13 03:20:04.989217 | 
orchestrator | changed: [testbed-node-0] 2026-02-13 03:20:04.989228 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:20:04.989239 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:20:04.989250 | orchestrator | 2026-02-13 03:20:04.989261 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-13 03:20:04.989272 | orchestrator | Friday 13 February 2026 03:20:02 +0000 (0:00:03.807) 0:00:13.344 ******* 2026-02-13 03:20:04.989283 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:20:04.989294 | orchestrator | 2026-02-13 03:20:04.989306 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-13 03:20:04.989316 | orchestrator | Friday 13 February 2026 03:20:02 +0000 (0:00:00.542) 0:00:13.887 ******* 2026-02-13 03:20:04.989335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 03:20:04.989356 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:20:04.989376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 03:20:09.739712 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:20:09.739843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 03:20:09.739904 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:20:09.739927 | orchestrator | 2026-02-13 03:20:09.739949 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-13 03:20:09.739970 | orchestrator | Friday 13 February 2026 03:20:04 +0000 (0:00:02.369) 0:00:16.256 ******* 2026-02-13 03:20:09.739992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 03:20:09.740014 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:20:09.740069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 03:20:09.740106 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:20:09.740129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 03:20:09.740151 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:20:09.740171 | orchestrator | 2026-02-13 03:20:09.740191 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-13 03:20:09.740212 | orchestrator | Friday 13 February 2026 03:20:07 +0000 (0:00:02.477) 0:00:18.734 ******* 2026-02-13 03:20:09.740257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 03:20:12.577193 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:20:12.577304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 03:20:12.577322 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:20:12.577363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 03:20:12.577397 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:20:12.577408 | orchestrator | 2026-02-13 03:20:12.577419 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-13 03:20:12.577431 | orchestrator | Friday 13 February 2026 03:20:09 +0000 (0:00:02.275) 0:00:21.010 ******* 2026-02-13 03:20:12.577460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 03:20:12.577473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 03:20:12.577498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 03:22:23.171128 | orchestrator | 2026-02-13 03:22:23.171244 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-13 03:22:23.171261 | orchestrator | Friday 13 February 2026 03:20:12 +0000 (0:00:02.836) 0:00:23.846 ******* 2026-02-13 03:22:23.171274 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:22:23.171286 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:22:23.171298 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:22:23.171309 | orchestrator | 2026-02-13 03:22:23.171320 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-13 03:22:23.171331 | orchestrator | Friday 13 February 2026 03:20:13 +0000 (0:00:00.804) 0:00:24.651 ******* 2026-02-13 03:22:23.171342 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:22:23.171354 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:22:23.171365 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:22:23.171375 | orchestrator | 2026-02-13 03:22:23.171386 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] *************
2026-02-13 03:22:23.171397 | orchestrator | Friday 13 February 2026 03:20:13 +0000 (0:00:00.483) 0:00:25.134 *******
2026-02-13 03:22:23.171408 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:23.171418 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:22:23.171429 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:22:23.171439 | orchestrator |
2026-02-13 03:22:23.171450 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-02-13 03:22:23.171461 | orchestrator | Friday 13 February 2026 03:20:14 +0000 (0:00:00.314) 0:00:25.448 *******
2026-02-13 03:22:23.171473 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-02-13 03:22:23.171485 | orchestrator | ...ignoring
2026-02-13 03:22:23.171496 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-02-13 03:22:23.171507 | orchestrator | ...ignoring
2026-02-13 03:22:23.171518 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-02-13 03:22:23.171529 | orchestrator | ...ignoring
2026-02-13 03:22:23.171563 | orchestrator |
2026-02-13 03:22:23.171574 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-02-13 03:22:23.171585 | orchestrator | Friday 13 February 2026 03:20:24 +0000 (0:00:10.813) 0:00:36.261 *******
2026-02-13 03:22:23.171596 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:23.171606 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:22:23.171617 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:22:23.171628 | orchestrator |
2026-02-13 03:22:23.171638 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-02-13 03:22:23.171649 | orchestrator | Friday 13 February 2026 03:20:25 +0000 (0:00:00.432) 0:00:36.694 *******
2026-02-13 03:22:23.171662 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:23.171674 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:23.171686 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:23.171699 | orchestrator |
2026-02-13 03:22:23.171712 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-02-13 03:22:23.171724 | orchestrator | Friday 13 February 2026 03:20:26 +0000 (0:00:00.611) 0:00:37.305 *******
2026-02-13 03:22:23.171737 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:23.171749 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:23.171761 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:23.171773 | orchestrator |
2026-02-13 03:22:23.171801 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-02-13 03:22:23.171815 | orchestrator | Friday 13 February 2026 03:20:26 +0000 (0:00:00.405) 0:00:37.710 *******
2026-02-13 03:22:23.171828 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:23.171864 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:23.171877 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:23.171889 | orchestrator |
2026-02-13 03:22:23.171901 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-13 03:22:23.171911 | orchestrator | Friday 13 February 2026 03:20:26 +0000 (0:00:00.400) 0:00:38.111 *******
2026-02-13 03:22:23.171922 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:23.171933 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:22:23.171944 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:22:23.171954 | orchestrator |
2026-02-13 03:22:23.171965 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-13 03:22:23.171977 | orchestrator | Friday 13 February 2026 03:20:27 +0000 (0:00:00.398) 0:00:38.510 *******
2026-02-13 03:22:23.171987 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:23.171998 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:23.172009 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:23.172019 | orchestrator |
2026-02-13 03:22:23.172030 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-13 03:22:23.172041 | orchestrator | Friday 13 February 2026 03:20:28 +0000 (0:00:00.795) 0:00:39.305 *******
2026-02-13 03:22:23.172052 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:23.172062 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:23.172073 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-13 03:22:23.172084 | orchestrator |
2026-02-13 03:22:23.172095 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-13 03:22:23.172106 | orchestrator | Friday 13 February 2026 03:20:28 +0000 (0:00:00.411) 0:00:39.717 *******
2026-02-13 03:22:23.172116 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:22:23.172127 | orchestrator |
2026-02-13 03:22:23.172138 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-13 03:22:23.172149 | orchestrator | Friday 13 February 2026 03:20:38 +0000 (0:00:09.965) 0:00:49.682 *******
2026-02-13 03:22:23.172159 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:23.172170 | orchestrator |
2026-02-13 03:22:23.172181 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-13 03:22:23.172192 | orchestrator | Friday 13 February 2026 03:20:38 +0000 (0:00:00.117) 0:00:49.800 *******
2026-02-13 03:22:23.172203 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:23.172241 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:23.172253 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:23.172264 | orchestrator |
2026-02-13 03:22:23.172274 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-13 03:22:23.172285 | orchestrator | Friday 13 February 2026 03:20:39 +0000 (0:00:00.929) 0:00:50.729 *******
2026-02-13 03:22:23.172296 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:22:23.172307 | orchestrator |
2026-02-13 03:22:23.172317 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-13 03:22:23.172328 | orchestrator | Friday 13 February 2026 03:20:46 +0000 (0:00:07.389) 0:00:58.118 *******
2026-02-13 03:22:23.172339 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:23.172350 | orchestrator |
2026-02-13 03:22:23.172360 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-13 03:22:23.172371 | orchestrator | Friday 13 February 2026 03:20:48 +0000 (0:00:01.679) 0:00:59.798 *******
2026-02-13 03:22:23.172381 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:23.172392 | orchestrator |
2026-02-13 03:22:23.172403 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-02-13 03:22:23.172414 | orchestrator | Friday 13 February 2026 03:20:50 +0000 (0:00:02.364) 0:01:02.162 *******
2026-02-13 03:22:23.172424 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:22:23.172435 | orchestrator |
2026-02-13 03:22:23.172446 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-13 03:22:23.172457 | orchestrator | Friday 13 February 2026 03:20:51 +0000 (0:00:00.126) 0:01:02.289 *******
2026-02-13 03:22:23.172467 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:23.172478 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:23.172488 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:23.172499 | orchestrator |
2026-02-13 03:22:23.172510 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-13 03:22:23.172520 | orchestrator | Friday 13 February 2026 03:20:51 +0000 (0:00:00.314) 0:01:02.604 *******
2026-02-13 03:22:23.172531 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:23.172542 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-13 03:22:23.172552 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:22:23.172563 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:22:23.172574 | orchestrator |
2026-02-13 03:22:23.172585 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-13 03:22:23.172595 | orchestrator | skipping: no hosts matched
2026-02-13 03:22:23.172606 | orchestrator |
2026-02-13 03:22:23.172617 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-13 03:22:23.172628 | orchestrator |
2026-02-13 03:22:23.172638 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-13 03:22:23.172649 | orchestrator | Friday 13 February 2026 03:20:51 +0000 (0:00:00.517) 0:01:03.121 *******
2026-02-13 03:22:23.172660 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:22:23.172670 | orchestrator |
2026-02-13 03:22:23.172681 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-13 03:22:23.172692 | orchestrator | Friday 13 February 2026 03:21:13 +0000 (0:00:21.917) 0:01:25.039 *******
2026-02-13 03:22:23.172702 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:22:23.172713 | orchestrator |
2026-02-13 03:22:23.172724 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-13 03:22:23.172734 | orchestrator | Friday 13 February 2026 03:21:25 +0000 (0:00:11.583) 0:01:36.623 *******
2026-02-13 03:22:23.172745 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:22:23.172756 | orchestrator |
2026-02-13 03:22:23.172771 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-13 03:22:23.172782 | orchestrator |
2026-02-13 03:22:23.172798 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-13 03:22:23.172809 | orchestrator | Friday 13 February 2026 03:21:27 +0000 (0:00:02.321) 0:01:38.944 *******
2026-02-13 03:22:23.172827 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:22:23.172868 | orchestrator |
2026-02-13 03:22:23.172881 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-13 03:22:23.172891 | orchestrator | Friday 13 February 2026 03:21:45 +0000 (0:00:17.530) 0:01:56.475 *******
2026-02-13 03:22:23.172902 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:22:23.172913 | orchestrator |
2026-02-13 03:22:23.172923 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-13 03:22:23.172934 | orchestrator | Friday 13 February 2026 03:22:01 +0000 (0:00:16.568) 0:02:13.043 *******
2026-02-13 03:22:23.172945 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:22:23.172955 | orchestrator |
2026-02-13 03:22:23.172966 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-13 03:22:23.172977 | orchestrator |
2026-02-13 03:22:23.172987 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-13 03:22:23.172998 | orchestrator | Friday 13 February 2026 03:22:04 +0000 (0:00:02.448) 0:02:15.492 *******
2026-02-13 03:22:23.173009 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:22:23.173019 | orchestrator |
2026-02-13 03:22:23.173030 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-13 03:22:23.173041 | orchestrator | Friday 13 February 2026 03:22:14 +0000 (0:00:10.260) 0:02:25.752 *******
2026-02-13 03:22:23.173051 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:23.173062 | orchestrator |
2026-02-13 03:22:23.173072 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-13 03:22:23.173083 | orchestrator | Friday 13 February 2026 03:22:19 +0000 (0:00:05.511) 0:02:31.264 *******
2026-02-13 03:22:23.173094 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:23.173104 | orchestrator |
2026-02-13 03:22:23.173115 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-13 03:22:23.173125 | orchestrator |
2026-02-13 03:22:23.173136 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-13 03:22:23.173147 | orchestrator | Friday 13 February 2026 03:22:22 +0000 (0:00:02.530) 0:02:33.795 *******
2026-02-13 03:22:23.173157 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:22:23.173168 | orchestrator |
2026-02-13 03:22:23.173179 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-13 03:22:23.173197 | orchestrator | Friday 13 February 2026 03:22:23 +0000 (0:00:00.643) 0:02:34.438 *******
2026-02-13 03:22:35.652039 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:35.652156 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:35.652171 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:22:35.652183 | orchestrator |
2026-02-13 03:22:35.652196 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-13 03:22:35.652208 | orchestrator | Friday 13 February 2026 03:22:25 +0000 (0:00:02.286) 0:02:36.724 *******
2026-02-13 03:22:35.652219 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:35.652230 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:35.652240 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:22:35.652251 | orchestrator |
2026-02-13 03:22:35.652262 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-13 03:22:35.652273 | orchestrator | Friday 13 February 2026 03:22:27 +0000 (0:00:02.127) 0:02:38.852 *******
2026-02-13 03:22:35.652284 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:35.652295 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:35.652305 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:22:35.652316 | orchestrator |
2026-02-13 03:22:35.652327 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-13 03:22:35.652338 | orchestrator | Friday 13 February 2026 03:22:29 +0000 (0:00:02.304) 0:02:41.156 *******
2026-02-13 03:22:35.652349 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:35.652359 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:35.652370 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:22:35.652380 | orchestrator |
2026-02-13 03:22:35.652435 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-13 03:22:35.652458 | orchestrator | Friday 13 February 2026 03:22:31 +0000 (0:00:02.049) 0:02:43.206 *******
2026-02-13 03:22:35.652470 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:35.652482 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:22:35.652492 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:22:35.652503 | orchestrator |
2026-02-13 03:22:35.652513 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-13 03:22:35.652524 | orchestrator | Friday 13 February 2026 03:22:34 +0000 (0:00:03.014) 0:02:46.220 *******
2026-02-13 03:22:35.652535 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:35.652546 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:22:35.652556 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:22:35.652567 | orchestrator |
2026-02-13 03:22:35.652578 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:22:35.652592 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-02-13 03:22:35.652606 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-13 03:22:35.652619 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-13 03:22:35.652632 | orchestrator |
2026-02-13 03:22:35.652644 | orchestrator |
2026-02-13 03:22:35.652657 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:22:35.652668 | orchestrator | Friday 13 February 2026 03:22:35 +0000 (0:00:00.394) 0:02:46.615 *******
2026-02-13 03:22:35.652679 | orchestrator | ===============================================================================
2026-02-13 03:22:35.652704 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.45s
2026-02-13 03:22:35.652715 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 28.15s
2026-02-13 03:22:35.652726 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.81s
2026-02-13 03:22:35.652736 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.26s
2026-02-13 03:22:35.652747 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.97s
2026-02-13 03:22:35.652758 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.39s
2026-02-13 03:22:35.652769 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.51s
2026-02-13 03:22:35.652780 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.77s
2026-02-13 03:22:35.652790 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.81s
2026-02-13 03:22:35.652801 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.01s
2026-02-13 03:22:35.652812 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.92s
2026-02-13 03:22:35.652822 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.84s
2026-02-13 03:22:35.652833 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.53s
2026-02-13 03:22:35.652843 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.48s
2026-02-13 03:22:35.652879 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.37s
2026-02-13 03:22:35.652890 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.36s
2026-02-13 03:22:35.652901 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.30s
2026-02-13 03:22:35.652911 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.29s
2026-02-13 03:22:35.652922 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.28s
2026-02-13 03:22:35.652932 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.22s
2026-02-13 03:22:37.935182 | orchestrator | 2026-02-13 03:22:37 | INFO  | Task c9f50a74-e1f9-48e6-a42a-bb9ec1ca587c (rabbitmq) was prepared for execution.
2026-02-13 03:22:37.935281 | orchestrator | 2026-02-13 03:22:37 | INFO  | It takes a moment until task c9f50a74-e1f9-48e6-a42a-bb9ec1ca587c (rabbitmq) has been started and output is visible here.
2026-02-13 03:22:50.713721 | orchestrator |
2026-02-13 03:22:50.713833 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 03:22:50.713850 | orchestrator |
2026-02-13 03:22:50.713863 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 03:22:50.713923 | orchestrator | Friday 13 February 2026 03:22:41 +0000 (0:00:00.162) 0:00:00.162 *******
2026-02-13 03:22:50.713936 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:50.713948 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:22:50.713960 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:22:50.713971 | orchestrator |
2026-02-13 03:22:50.713982 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 03:22:50.713994 | orchestrator | Friday 13 February 2026 03:22:42 +0000 (0:00:00.305) 0:00:00.467 *******
2026-02-13 03:22:50.714005 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-13 03:22:50.714070 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-13 03:22:50.714084 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-13 03:22:50.714095 | orchestrator |
2026-02-13 03:22:50.714107 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-13 03:22:50.714118 | orchestrator |
2026-02-13 03:22:50.714131 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-13 03:22:50.714142 | orchestrator | Friday 13 February 2026 03:22:42 +0000 (0:00:00.544) 0:00:01.012 *******
2026-02-13 03:22:50.714154 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:22:50.714166 | orchestrator |
2026-02-13 03:22:50.714177 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-13 03:22:50.714188 | orchestrator | Friday 13 February 2026 03:22:43 +0000 (0:00:00.523) 0:00:01.536 *******
2026-02-13 03:22:50.714199 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:50.714210 | orchestrator |
2026-02-13 03:22:50.714221 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-13 03:22:50.714232 | orchestrator | Friday 13 February 2026 03:22:44 +0000 (0:00:00.962) 0:00:02.499 *******
2026-02-13 03:22:50.714246 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:50.714260 | orchestrator |
2026-02-13 03:22:50.714272 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-13 03:22:50.714284 | orchestrator | Friday 13 February 2026 03:22:44 +0000 (0:00:00.385) 0:00:02.884 *******
2026-02-13 03:22:50.714297 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:50.714310 | orchestrator |
2026-02-13 03:22:50.714322 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-13 03:22:50.714334 | orchestrator | Friday 13 February 2026 03:22:45 +0000 (0:00:00.355) 0:00:03.240 *******
2026-02-13 03:22:50.714347 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:50.714359 | orchestrator |
2026-02-13 03:22:50.714370 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-02-13 03:22:50.714381 | orchestrator | Friday 13 February 2026 03:22:45 +0000 (0:00:00.350) 0:00:03.590 *******
2026-02-13 03:22:50.714392 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:50.714403 | orchestrator |
2026-02-13 03:22:50.714414 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-13 03:22:50.714438 | orchestrator | Friday 13 February 2026 03:22:45 +0000 (0:00:00.535) 0:00:04.125 *******
2026-02-13 03:22:50.714478 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:22:50.714514 | orchestrator |
2026-02-13 03:22:50.714526 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-13 03:22:50.714536 | orchestrator | Friday 13 February 2026 03:22:46 +0000 (0:00:00.820) 0:00:04.946 *******
2026-02-13 03:22:50.714547 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:22:50.714558 | orchestrator |
2026-02-13 03:22:50.714568 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-02-13 03:22:50.714579 | orchestrator | Friday 13 February 2026 03:22:47 +0000 (0:00:00.828) 0:00:05.774 *******
2026-02-13 03:22:50.714590 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:50.714601 | orchestrator |
2026-02-13 03:22:50.714612 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-02-13 03:22:50.714622 | orchestrator | Friday 13 February 2026 03:22:47 +0000 (0:00:00.362) 0:00:06.137 *******
2026-02-13 03:22:50.714633 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:22:50.714644 | orchestrator |
2026-02-13 03:22:50.714655 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-02-13 03:22:50.714665 | orchestrator | Friday 13 February 2026 03:22:48 +0000 (0:00:00.358) 0:00:06.495 *******
2026-02-13 03:22:50.714701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-13 03:22:50.714717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-13 03:22:50.714730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-13 03:22:50.714751 | orchestrator |
2026-02-13 03:22:50.714768 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-02-13 03:22:50.714779 | orchestrator | Friday 13 February 2026 03:22:49 +0000 (0:00:00.813) 0:00:07.308 *******
2026-02-13 03:22:50.714791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-13 03:22:50.714813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-13 03:23:09.306356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-13 03:23:09.306478 | orchestrator |
2026-02-13 03:23:09.306496 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-02-13 03:23:09.306509 | orchestrator | Friday 13 February 2026 03:22:50 +0000 (0:00:01.605) 0:00:08.914 *******
2026-02-13 03:23:09.306546 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-13 03:23:09.306558 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-13 03:23:09.306569 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-02-13 03:23:09.306580 | orchestrator |
2026-02-13 03:23:09.306592 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-02-13 03:23:09.306602 | orchestrator | Friday 13 February 2026 03:22:52 +0000 (0:00:01.394) 0:00:10.309 *******
2026-02-13 03:23:09.306627 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-13 03:23:09.306639 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-13 03:23:09.306650 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-02-13 03:23:09.306660 | orchestrator |
2026-02-13 03:23:09.306671 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-02-13 03:23:09.306682 | orchestrator | Friday 13 February 2026 03:22:53 +0000 (0:00:01.407) 0:00:11.964 *******
2026-02-13 03:23:09.306693 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-13 03:23:09.306703 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-13 03:23:09.306714 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-02-13 03:23:09.306724 | orchestrator |
2026-02-13 03:23:09.306735 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-02-13 03:23:09.306746 | orchestrator | Friday 13 February 2026 03:22:55 +0000 (0:00:01.407) 0:00:13.372 *******
2026-02-13 03:23:09.306757 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-13 03:23:09.306767 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-13 03:23:09.306778 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-02-13 03:23:09.306789 | orchestrator |
2026-02-13 03:23:09.306800 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-02-13 03:23:09.306811 | orchestrator | Friday 13 February 2026 03:22:56 +0000 (0:00:01.647) 0:00:15.020 *******
2026-02-13 03:23:09.306821 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-13 03:23:09.306832 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-13 03:23:09.306843 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-02-13 03:23:09.306854 | orchestrator |
2026-02-13 03:23:09.306865 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-02-13 03:23:09.306876 | orchestrator | Friday 13 February 2026 03:22:58 +0000 (0:00:01.391) 0:00:16.411 *******
2026-02-13 03:23:09.306889 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-13 03:23:09.306933 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-13 03:23:09.306946 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-02-13 03:23:09.306958 | orchestrator |
2026-02-13 03:23:09.306971 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-13 03:23:09.306984 | orchestrator | Friday 13 February 2026 03:22:59 +0000 (0:00:01.394) 0:00:17.805 *******
2026-02-13 03:23:09.306997 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:23:09.307011 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:23:09.307041 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:23:09.307065 | orchestrator |
2026-02-13 03:23:09.307077 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-02-13 03:23:09.307090 | orchestrator | Friday 13 February 2026 03:22:59 +0000 (0:00:00.405) 0:00:18.211 *******
2026-02-13 03:23:09.307104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-13 03:23:09.307126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-13 03:23:09.307141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-02-13 03:23:09.307154 | orchestrator |
2026-02-13 03:23:09.307168 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-02-13 03:23:09.307180 | orchestrator | Friday 13 February 2026 03:23:01 +0000 (0:00:01.145) 0:00:19.356 *******
2026-02-13 03:23:09.307193 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:23:09.307206 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:23:09.307218 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:23:09.307231 | orchestrator |
2026-02-13 03:23:09.307243 | orchestrator | TASK
[rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-02-13 03:23:09.307260 | orchestrator | Friday 13 February 2026 03:23:01 +0000 (0:00:00.844) 0:00:20.200 ******* 2026-02-13 03:23:09.307270 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:23:09.307281 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:23:09.307292 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:23:09.307303 | orchestrator | 2026-02-13 03:23:09.307314 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-13 03:23:09.307332 | orchestrator | Friday 13 February 2026 03:23:09 +0000 (0:00:07.299) 0:00:27.500 ******* 2026-02-13 03:24:45.489779 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:24:45.489886 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:24:45.489903 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:24:45.489916 | orchestrator | 2026-02-13 03:24:45.489929 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-13 03:24:45.489941 | orchestrator | 2026-02-13 03:24:45.489953 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-13 03:24:45.489965 | orchestrator | Friday 13 February 2026 03:23:09 +0000 (0:00:00.483) 0:00:27.983 ******* 2026-02-13 03:24:45.490080 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:24:45.490097 | orchestrator | 2026-02-13 03:24:45.490108 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-13 03:24:45.490119 | orchestrator | Friday 13 February 2026 03:23:10 +0000 (0:00:00.591) 0:00:28.575 ******* 2026-02-13 03:24:45.490130 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:24:45.490141 | orchestrator | 2026-02-13 03:24:45.490152 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-13 03:24:45.490163 | orchestrator | Friday 13 
February 2026 03:23:10 +0000 (0:00:00.231) 0:00:28.807 ******* 2026-02-13 03:24:45.490174 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:24:45.490185 | orchestrator | 2026-02-13 03:24:45.490196 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-13 03:24:45.490207 | orchestrator | Friday 13 February 2026 03:23:12 +0000 (0:00:01.635) 0:00:30.442 ******* 2026-02-13 03:24:45.490218 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:24:45.490229 | orchestrator | 2026-02-13 03:24:45.490241 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-13 03:24:45.490252 | orchestrator | 2026-02-13 03:24:45.490262 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-13 03:24:45.490273 | orchestrator | Friday 13 February 2026 03:24:06 +0000 (0:00:53.878) 0:01:24.321 ******* 2026-02-13 03:24:45.490284 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:24:45.490295 | orchestrator | 2026-02-13 03:24:45.490306 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-13 03:24:45.490316 | orchestrator | Friday 13 February 2026 03:24:06 +0000 (0:00:00.606) 0:01:24.927 ******* 2026-02-13 03:24:45.490327 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:24:45.490338 | orchestrator | 2026-02-13 03:24:45.490349 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-13 03:24:45.490360 | orchestrator | Friday 13 February 2026 03:24:06 +0000 (0:00:00.229) 0:01:25.156 ******* 2026-02-13 03:24:45.490370 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:24:45.490381 | orchestrator | 2026-02-13 03:24:45.490392 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-13 03:24:45.490430 | orchestrator | Friday 13 February 2026 03:24:08 +0000 (0:00:01.545) 0:01:26.702 
******* 2026-02-13 03:24:45.490442 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:24:45.490453 | orchestrator | 2026-02-13 03:24:45.490463 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-13 03:24:45.490474 | orchestrator | 2026-02-13 03:24:45.490485 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-13 03:24:45.490496 | orchestrator | Friday 13 February 2026 03:24:23 +0000 (0:00:14.605) 0:01:41.308 ******* 2026-02-13 03:24:45.490507 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:24:45.490518 | orchestrator | 2026-02-13 03:24:45.490551 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-13 03:24:45.490563 | orchestrator | Friday 13 February 2026 03:24:23 +0000 (0:00:00.746) 0:01:42.055 ******* 2026-02-13 03:24:45.490573 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:24:45.490584 | orchestrator | 2026-02-13 03:24:45.490595 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-13 03:24:45.490606 | orchestrator | Friday 13 February 2026 03:24:24 +0000 (0:00:00.233) 0:01:42.289 ******* 2026-02-13 03:24:45.490617 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:24:45.490628 | orchestrator | 2026-02-13 03:24:45.490639 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-13 03:24:45.490650 | orchestrator | Friday 13 February 2026 03:24:30 +0000 (0:00:06.623) 0:01:48.912 ******* 2026-02-13 03:24:45.490660 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:24:45.490671 | orchestrator | 2026-02-13 03:24:45.490682 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-13 03:24:45.490692 | orchestrator | 2026-02-13 03:24:45.490703 | orchestrator | TASK [Include rabbitmq post-deploy.yml] 
**************************************** 2026-02-13 03:24:45.490714 | orchestrator | Friday 13 February 2026 03:24:42 +0000 (0:00:11.764) 0:02:00.677 ******* 2026-02-13 03:24:45.490724 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:24:45.490735 | orchestrator | 2026-02-13 03:24:45.490746 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-13 03:24:45.490756 | orchestrator | Friday 13 February 2026 03:24:42 +0000 (0:00:00.453) 0:02:01.130 ******* 2026-02-13 03:24:45.490767 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-13 03:24:45.490777 | orchestrator | enable_outward_rabbitmq_True 2026-02-13 03:24:45.490788 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-13 03:24:45.490799 | orchestrator | outward_rabbitmq_restart 2026-02-13 03:24:45.490810 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:24:45.490821 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:24:45.490831 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:24:45.490842 | orchestrator | 2026-02-13 03:24:45.490853 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2026-02-13 03:24:45.490863 | orchestrator | skipping: no hosts matched 2026-02-13 03:24:45.490874 | orchestrator | 2026-02-13 03:24:45.490885 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-02-13 03:24:45.490896 | orchestrator | skipping: no hosts matched 2026-02-13 03:24:45.490906 | orchestrator | 2026-02-13 03:24:45.490917 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-02-13 03:24:45.490928 | orchestrator | skipping: no hosts matched 2026-02-13 03:24:45.490938 | orchestrator | 2026-02-13 03:24:45.490949 | orchestrator | PLAY RECAP ********************************************************************* 
2026-02-13 03:24:45.491005 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-13 03:24:45.491020 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 03:24:45.491031 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 03:24:45.491042 | orchestrator | 2026-02-13 03:24:45.491053 | orchestrator | 2026-02-13 03:24:45.491064 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:24:45.491075 | orchestrator | Friday 13 February 2026 03:24:45 +0000 (0:00:02.250) 0:02:03.380 ******* 2026-02-13 03:24:45.491085 | orchestrator | =============================================================================== 2026-02-13 03:24:45.491096 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.25s 2026-02-13 03:24:45.491107 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 9.81s 2026-02-13 03:24:45.491127 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.30s 2026-02-13 03:24:45.491138 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.25s 2026-02-13 03:24:45.491148 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.95s 2026-02-13 03:24:45.491159 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.66s 2026-02-13 03:24:45.491170 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.65s 2026-02-13 03:24:45.491181 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.61s 2026-02-13 03:24:45.491191 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.41s 2026-02-13 03:24:45.491202 
| orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.39s 2026-02-13 03:24:45.491213 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.39s 2026-02-13 03:24:45.491223 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.39s 2026-02-13 03:24:45.491234 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.15s 2026-02-13 03:24:45.491245 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.96s 2026-02-13 03:24:45.491262 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.84s 2026-02-13 03:24:45.491273 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.83s 2026-02-13 03:24:45.491284 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.82s 2026-02-13 03:24:45.491295 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.81s 2026-02-13 03:24:45.491306 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.69s 2026-02-13 03:24:45.491316 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2026-02-13 03:24:47.745316 | orchestrator | 2026-02-13 03:24:47 | INFO  | Task 15cd063e-e4ef-4dd1-8bbd-16077f42032f (openvswitch) was prepared for execution. 2026-02-13 03:24:47.745424 | orchestrator | 2026-02-13 03:24:47 | INFO  | It takes a moment until task 15cd063e-e4ef-4dd1-8bbd-16077f42032f (openvswitch) has been started and output is visible here. 
2026-02-13 03:24:59.975649 | orchestrator | 2026-02-13 03:24:59.975779 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 03:24:59.975805 | orchestrator | 2026-02-13 03:24:59.975822 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 03:24:59.975840 | orchestrator | Friday 13 February 2026 03:24:51 +0000 (0:00:00.245) 0:00:00.245 ******* 2026-02-13 03:24:59.975859 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:24:59.975877 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:24:59.975896 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:24:59.975915 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:24:59.975933 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:24:59.975951 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:24:59.975971 | orchestrator | 2026-02-13 03:24:59.976042 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 03:24:59.976062 | orchestrator | Friday 13 February 2026 03:24:52 +0000 (0:00:00.677) 0:00:00.922 ******* 2026-02-13 03:24:59.976075 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-13 03:24:59.976087 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-13 03:24:59.976098 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-13 03:24:59.976109 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-13 03:24:59.976120 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-13 03:24:59.976131 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-13 03:24:59.976142 | orchestrator | 2026-02-13 03:24:59.976189 | orchestrator | PLAY [Apply role openvswitch] 
************************************************** 2026-02-13 03:24:59.976211 | orchestrator | 2026-02-13 03:24:59.976232 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-13 03:24:59.976252 | orchestrator | Friday 13 February 2026 03:24:53 +0000 (0:00:00.556) 0:00:01.479 ******* 2026-02-13 03:24:59.976267 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:24:59.976282 | orchestrator | 2026-02-13 03:24:59.976293 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-13 03:24:59.976303 | orchestrator | Friday 13 February 2026 03:24:54 +0000 (0:00:01.066) 0:00:02.545 ******* 2026-02-13 03:24:59.976315 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-13 03:24:59.976326 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-02-13 03:24:59.976337 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-13 03:24:59.976347 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-13 03:24:59.976358 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-13 03:24:59.976369 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-13 03:24:59.976379 | orchestrator | 2026-02-13 03:24:59.976390 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-13 03:24:59.976401 | orchestrator | Friday 13 February 2026 03:24:55 +0000 (0:00:01.153) 0:00:03.699 ******* 2026-02-13 03:24:59.976412 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-02-13 03:24:59.976422 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-02-13 03:24:59.976433 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-02-13 03:24:59.976443 | orchestrator | changed: 
[testbed-node-0] => (item=openvswitch) 2026-02-13 03:24:59.976454 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-02-13 03:24:59.976464 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-02-13 03:24:59.976475 | orchestrator | 2026-02-13 03:24:59.976485 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-13 03:24:59.976496 | orchestrator | Friday 13 February 2026 03:24:56 +0000 (0:00:01.492) 0:00:05.191 ******* 2026-02-13 03:24:59.976506 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-13 03:24:59.976517 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:24:59.976529 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-13 03:24:59.976539 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:24:59.976550 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-02-13 03:24:59.976560 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:24:59.976571 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-13 03:24:59.976582 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:24:59.976592 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-13 03:24:59.976603 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:24:59.976613 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-13 03:24:59.976624 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:24:59.976635 | orchestrator | 2026-02-13 03:24:59.976646 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-13 03:24:59.976657 | orchestrator | Friday 13 February 2026 03:24:57 +0000 (0:00:01.142) 0:00:06.334 ******* 2026-02-13 03:24:59.976667 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:24:59.976678 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:24:59.976689 | orchestrator | skipping: [testbed-node-2] 
2026-02-13 03:24:59.976699 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:24:59.976710 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:24:59.976720 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:24:59.976731 | orchestrator | 2026-02-13 03:24:59.976748 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-13 03:24:59.976778 | orchestrator | Friday 13 February 2026 03:24:58 +0000 (0:00:00.781) 0:00:07.116 ******* 2026-02-13 03:24:59.976828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:24:59.976854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:24:59.976875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:24:59.976941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:24:59.976960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:24:59.976981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:02.341871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:02.341979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:02.342129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:02.342148 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:02.342185 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:02.342251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:02.342265 | orchestrator | 2026-02-13 03:25:02.342277 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-13 03:25:02.342288 | orchestrator | Friday 13 February 2026 03:25:00 +0000 (0:00:01.369) 0:00:08.485 ******* 2026-02-13 03:25:02.342299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:02.342310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:02.342324 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:02.342348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:02.342385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:02.342413 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998522 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998563 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998588 | orchestrator | 2026-02-13 03:25:04.998595 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-13 03:25:04.998601 | orchestrator | Friday 13 February 2026 03:25:02 +0000 (0:00:02.350) 0:00:10.835 ******* 2026-02-13 03:25:04.998607 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:25:04.998614 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:25:04.998619 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:25:04.998624 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:25:04.998630 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:25:04.998635 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:25:04.998641 | orchestrator | 2026-02-13 03:25:04.998647 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-13 03:25:04.998653 | orchestrator | Friday 13 February 2026 03:25:03 +0000 (0:00:00.930) 0:00:11.765 ******* 2026-02-13 03:25:04.998659 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:04.998697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:30.317241 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 03:25:30.317348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:30.317365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 
03:25:30.317412 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:30.317424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:30.317451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:30.317462 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 03:25:30.317472 | orchestrator | 2026-02-13 03:25:30.317485 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-13 03:25:30.317496 | orchestrator | Friday 13 February 2026 03:25:05 +0000 (0:00:01.737) 0:00:13.503 ******* 2026-02-13 03:25:30.317507 | orchestrator | 2026-02-13 03:25:30.317517 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-13 03:25:30.317526 | orchestrator | Friday 13 February 2026 03:25:05 +0000 (0:00:00.314) 0:00:13.818 ******* 2026-02-13 03:25:30.317544 | orchestrator | 2026-02-13 03:25:30.317553 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-13 03:25:30.317563 | orchestrator | Friday 13 February 2026 03:25:05 +0000 (0:00:00.133) 0:00:13.951 ******* 2026-02-13 03:25:30.317572 | orchestrator | 2026-02-13 03:25:30.317582 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 
2026-02-13 03:25:30.317591 | orchestrator | Friday 13 February 2026 03:25:05 +0000 (0:00:00.131) 0:00:14.083 ******* 2026-02-13 03:25:30.317601 | orchestrator | 2026-02-13 03:25:30.317610 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-13 03:25:30.317620 | orchestrator | Friday 13 February 2026 03:25:05 +0000 (0:00:00.129) 0:00:14.212 ******* 2026-02-13 03:25:30.317629 | orchestrator | 2026-02-13 03:25:30.317639 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-13 03:25:30.317648 | orchestrator | Friday 13 February 2026 03:25:05 +0000 (0:00:00.128) 0:00:14.341 ******* 2026-02-13 03:25:30.317658 | orchestrator | 2026-02-13 03:25:30.317667 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-13 03:25:30.317677 | orchestrator | Friday 13 February 2026 03:25:06 +0000 (0:00:00.127) 0:00:14.469 ******* 2026-02-13 03:25:30.317687 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:25:30.317698 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:25:30.317707 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:25:30.317717 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:25:30.317726 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:25:30.317735 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:25:30.317745 | orchestrator | 2026-02-13 03:25:30.317755 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-13 03:25:30.317767 | orchestrator | Friday 13 February 2026 03:25:14 +0000 (0:00:08.588) 0:00:23.057 ******* 2026-02-13 03:25:30.317779 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:25:30.317796 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:25:30.317808 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:25:30.317819 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:25:30.317830 | orchestrator | ok: 
[testbed-node-4] 2026-02-13 03:25:30.317840 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:25:30.317852 | orchestrator | 2026-02-13 03:25:30.317864 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-13 03:25:30.317875 | orchestrator | Friday 13 February 2026 03:25:15 +0000 (0:00:01.088) 0:00:24.145 ******* 2026-02-13 03:25:30.317886 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:25:30.317897 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:25:30.317908 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:25:30.317919 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:25:30.317930 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:25:30.317940 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:25:30.317951 | orchestrator | 2026-02-13 03:25:30.317962 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-13 03:25:30.317974 | orchestrator | Friday 13 February 2026 03:25:23 +0000 (0:00:08.059) 0:00:32.205 ******* 2026-02-13 03:25:30.317984 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-13 03:25:30.317996 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-13 03:25:30.318086 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-13 03:25:30.318099 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-13 03:25:30.318110 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-13 03:25:30.318121 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-13 
03:25:30.318132 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-13 03:25:30.318158 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-13 03:25:43.463093 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-13 03:25:43.463226 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-13 03:25:43.463252 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-13 03:25:43.463271 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-13 03:25:43.463290 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 03:25:43.463309 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 03:25:43.463327 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 03:25:43.463345 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 03:25:43.463365 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 03:25:43.463384 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 03:25:43.463405 | orchestrator | 2026-02-13 03:25:43.463427 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 
2026-02-13 03:25:43.463446 | orchestrator | Friday 13 February 2026 03:25:30 +0000 (0:00:06.516) 0:00:38.722 ******* 2026-02-13 03:25:43.463466 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-13 03:25:43.463486 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:25:43.463507 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-13 03:25:43.463525 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:25:43.463543 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-13 03:25:43.463557 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:25:43.463570 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-02-13 03:25:43.463584 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-02-13 03:25:43.463596 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-02-13 03:25:43.463609 | orchestrator | 2026-02-13 03:25:43.463623 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-13 03:25:43.463651 | orchestrator | Friday 13 February 2026 03:25:32 +0000 (0:00:02.449) 0:00:41.171 ******* 2026-02-13 03:25:43.463663 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-13 03:25:43.463676 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:25:43.463689 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-13 03:25:43.463702 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:25:43.463714 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-13 03:25:43.463727 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:25:43.463739 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-13 03:25:43.463752 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-13 03:25:43.463782 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-13 03:25:43.463794 | orchestrator 
| 2026-02-13 03:25:43.463805 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-13 03:25:43.463850 | orchestrator | Friday 13 February 2026 03:25:35 +0000 (0:00:03.253) 0:00:44.425 ******* 2026-02-13 03:25:43.463862 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:25:43.463873 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:25:43.463907 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:25:43.463919 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:25:43.463930 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:25:43.463940 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:25:43.463951 | orchestrator | 2026-02-13 03:25:43.463962 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 03:25:43.463975 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 03:25:43.463987 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 03:25:43.463998 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 03:25:43.464072 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-13 03:25:43.464085 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-13 03:25:43.464096 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-13 03:25:43.464106 | orchestrator | 2026-02-13 03:25:43.464117 | orchestrator | 2026-02-13 03:25:43.464128 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:25:43.464139 | orchestrator | Friday 13 February 2026 03:25:43 +0000 (0:00:07.079) 0:00:51.504 ******* 2026-02-13 03:25:43.464171 | 
orchestrator | =============================================================================== 2026-02-13 03:25:43.464183 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.14s 2026-02-13 03:25:43.464194 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.59s 2026-02-13 03:25:43.464205 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.52s 2026-02-13 03:25:43.464215 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.25s 2026-02-13 03:25:43.464226 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.45s 2026-02-13 03:25:43.464237 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.35s 2026-02-13 03:25:43.464247 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.74s 2026-02-13 03:25:43.464258 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.49s 2026-02-13 03:25:43.464269 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.37s 2026-02-13 03:25:43.464280 | orchestrator | module-load : Load modules ---------------------------------------------- 1.15s 2026-02-13 03:25:43.464290 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.14s 2026-02-13 03:25:43.464301 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.09s 2026-02-13 03:25:43.464312 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.07s 2026-02-13 03:25:43.464323 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.97s 2026-02-13 03:25:43.464333 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 0.93s 2026-02-13 03:25:43.464344 | orchestrator | 
openvswitch : Create /run/openvswitch directory on host ----------------- 0.78s 2026-02-13 03:25:43.464355 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.68s 2026-02-13 03:25:43.464366 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2026-02-13 03:25:45.740596 | orchestrator | 2026-02-13 03:25:45 | INFO  | Task 08408c5e-a0cf-452c-9609-5720aa9d9c73 (ovn) was prepared for execution. 2026-02-13 03:25:45.740699 | orchestrator | 2026-02-13 03:25:45 | INFO  | It takes a moment until task 08408c5e-a0cf-452c-9609-5720aa9d9c73 (ovn) has been started and output is visible here. 2026-02-13 03:25:56.255996 | orchestrator | 2026-02-13 03:25:56.256199 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 03:25:56.256229 | orchestrator | 2026-02-13 03:25:56.256250 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 03:25:56.256268 | orchestrator | Friday 13 February 2026 03:25:49 +0000 (0:00:00.160) 0:00:00.160 ******* 2026-02-13 03:25:56.256288 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:25:56.256307 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:25:56.256326 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:25:56.256338 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:25:56.256348 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:25:56.256359 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:25:56.256370 | orchestrator | 2026-02-13 03:25:56.256381 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 03:25:56.256392 | orchestrator | Friday 13 February 2026 03:25:50 +0000 (0:00:00.699) 0:00:00.859 ******* 2026-02-13 03:25:56.256419 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-13 03:25:56.256431 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-13 
03:25:56.256441 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-13 03:25:56.256450 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-13 03:25:56.256460 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-13 03:25:56.256469 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-13 03:25:56.256479 | orchestrator | 2026-02-13 03:25:56.256489 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-13 03:25:56.256499 | orchestrator | 2026-02-13 03:25:56.256509 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-13 03:25:56.256519 | orchestrator | Friday 13 February 2026 03:25:51 +0000 (0:00:00.798) 0:00:01.657 ******* 2026-02-13 03:25:56.256532 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:25:56.256545 | orchestrator | 2026-02-13 03:25:56.256556 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-13 03:25:56.256567 | orchestrator | Friday 13 February 2026 03:25:52 +0000 (0:00:01.062) 0:00:02.720 ******* 2026-02-13 03:25:56.256581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256607 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256692 | orchestrator | 2026-02-13 03:25:56.256704 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-13 03:25:56.256715 | orchestrator | Friday 13 February 2026 03:25:53 +0000 (0:00:01.175) 0:00:03.895 ******* 2026-02-13 03:25:56.256732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256767 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256807 | orchestrator | 2026-02-13 03:25:56.256819 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-13 03:25:56.256831 | orchestrator | Friday 13 February 2026 03:25:55 +0000 (0:00:01.579) 0:00:05.474 ******* 2026-02-13 03:25:56.256843 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:25:56.256871 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.136882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137087 | orchestrator | 2026-02-13 03:26:21.137106 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-13 03:26:21.137123 | orchestrator | Friday 13 February 2026 03:25:56 +0000 (0:00:01.127) 0:00:06.602 ******* 2026-02-13 03:26:21.137140 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137157 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137280 | orchestrator | 2026-02-13 03:26:21.137295 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-13 03:26:21.137309 | orchestrator | Friday 13 February 2026 03:25:57 +0000 (0:00:01.525) 0:00:08.127 ******* 
2026-02-13 03:26:21.137335 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137409 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:26:21.137441 | orchestrator | 2026-02-13 03:26:21.137457 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-13 03:26:21.137473 | orchestrator | Friday 13 February 2026 03:25:59 +0000 (0:00:01.347) 0:00:09.474 ******* 2026-02-13 03:26:21.137489 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:26:21.137506 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:26:21.137520 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:26:21.137534 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:26:21.137549 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:26:21.137563 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:26:21.137577 | orchestrator | 2026-02-13 03:26:21.137591 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-13 03:26:21.137606 | orchestrator | Friday 13 February 2026 03:26:01 +0000 (0:00:02.574) 0:00:12.049 ******* 2026-02-13 03:26:21.137620 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 
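The `external_ids` keys this task writes into the local Open vSwitch database can be inspected or reproduced by hand. A rough sketch of the equivalent `ovs-vsctl` invocations, using testbed-node-3's values from the log as an illustration (Kolla-Ansible applies these through its own module, so the exact mechanism may differ):

```shell
# Sketch only: roughly what the "Configure OVN in OVSDB" task applies on
# each node (values for testbed-node-3 taken from the log above; Kolla's
# own vswitchd module may set them differently).
ovs-vsctl set open_vswitch . external_ids:ovn-encap-ip=192.168.16.13
ovs-vsctl set open_vswitch . external_ids:ovn-encap-type=geneve
ovs-vsctl set open_vswitch . \
    external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"
ovs-vsctl set open_vswitch . external_ids:ovn-remote-probe-interval=60000
ovs-vsctl set open_vswitch . external_ids:ovn-openflow-probe-interval=60

# Inspect what was applied:
ovs-vsctl --format=list list open_vswitch .
```

Note that only the three control-plane nodes (testbed-node-0/1/2) additionally receive `ovn-bridge-mappings` and `ovn-cms-options` with `enable-chassis-as-gw`, which is why the log shows `state: present` for them and `state: absent` for the compute nodes.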
2026-02-13 03:26:21.137635 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-13 03:26:21.137649 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-13 03:26:21.137663 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-13 03:26:21.137676 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-13 03:26:21.137691 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-13 03:26:21.137719 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 03:27:00.168277 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 03:27:00.168366 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 03:27:00.168389 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 03:27:00.168396 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 03:27:00.168402 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 03:27:00.168410 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-13 03:27:00.168418 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-13 03:27:00.168440 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-13 03:27:00.168447 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-13 03:27:00.168453 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-13 03:27:00.168460 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-13 03:27:00.168466 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 03:27:00.168474 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 03:27:00.168480 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 03:27:00.168486 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 03:27:00.168493 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 03:27:00.168499 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 03:27:00.168505 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 03:27:00.168511 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 03:27:00.168518 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 03:27:00.168524 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 03:27:00.168530 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-13 03:27:00.168536 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 03:27:00.168542 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 03:27:00.168549 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 03:27:00.168555 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 03:27:00.168561 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 03:27:00.168567 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 03:27:00.168573 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 03:27:00.168579 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-13 03:27:00.168586 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-13 03:27:00.168592 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-13 03:27:00.168598 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-13 03:27:00.168605 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-13 03:27:00.168611 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-13 03:27:00.168618 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 
'present'}) 2026-02-13 03:27:00.168641 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-13 03:27:00.168648 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-13 03:27:00.168659 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-13 03:27:00.168665 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-13 03:27:00.168671 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-13 03:27:00.168677 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-13 03:27:00.168684 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-13 03:27:00.168690 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-13 03:27:00.168696 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-13 03:27:00.168702 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-13 03:27:00.168709 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-13 03:27:00.168715 | orchestrator | 2026-02-13 03:27:00.168722 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-13 03:27:00.168729 | orchestrator | Friday 13 February 2026 03:26:20 +0000 (0:00:18.889) 0:00:30.938 ******* 2026-02-13 03:27:00.168735 | orchestrator | 2026-02-13 03:27:00.168742 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 03:27:00.168748 | orchestrator | Friday 13 February 2026 03:26:20 +0000 (0:00:00.219) 0:00:31.158 ******* 2026-02-13 03:27:00.168754 | orchestrator | 2026-02-13 03:27:00.168760 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 03:27:00.168767 | orchestrator | Friday 13 February 2026 03:26:20 +0000 (0:00:00.062) 0:00:31.220 ******* 2026-02-13 03:27:00.168773 | orchestrator | 2026-02-13 03:27:00.168779 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 03:27:00.168785 | orchestrator | Friday 13 February 2026 03:26:20 +0000 (0:00:00.061) 0:00:31.282 ******* 2026-02-13 03:27:00.168791 | orchestrator | 2026-02-13 03:27:00.168798 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 03:27:00.168804 | orchestrator | Friday 13 February 2026 03:26:20 +0000 (0:00:00.066) 0:00:31.349 ******* 2026-02-13 03:27:00.168810 | orchestrator | 2026-02-13 03:27:00.168816 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 03:27:00.168822 | orchestrator | Friday 13 February 2026 03:26:21 +0000 (0:00:00.063) 0:00:31.412 ******* 2026-02-13 03:27:00.168828 | orchestrator | 2026-02-13 03:27:00.168835 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-13 03:27:00.168841 | orchestrator | Friday 13 February 2026 03:26:21 +0000 (0:00:00.062) 0:00:31.475 ******* 2026-02-13 03:27:00.168847 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:27:00.168855 | orchestrator | ok: 
[testbed-node-3] 2026-02-13 03:27:00.168862 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:27:00.168868 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:27:00.168874 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:27:00.168880 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:27:00.168886 | orchestrator | 2026-02-13 03:27:00.168893 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-13 03:27:00.168899 | orchestrator | Friday 13 February 2026 03:26:22 +0000 (0:00:01.565) 0:00:33.041 ******* 2026-02-13 03:27:00.168912 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:27:00.168918 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:27:00.168924 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:27:00.168931 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:27:00.168937 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:27:00.168943 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:27:00.168949 | orchestrator | 2026-02-13 03:27:00.168955 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-13 03:27:00.168961 | orchestrator | 2026-02-13 03:27:00.168968 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-13 03:27:00.168974 | orchestrator | Friday 13 February 2026 03:26:58 +0000 (0:00:35.362) 0:01:08.403 ******* 2026-02-13 03:27:00.168980 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:27:00.168986 | orchestrator | 2026-02-13 03:27:00.168993 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-13 03:27:00.168999 | orchestrator | Friday 13 February 2026 03:26:58 +0000 (0:00:00.687) 0:01:09.091 ******* 2026-02-13 03:27:00.169005 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-13 03:27:00.169011 | orchestrator | 2026-02-13 03:27:00.169018 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-13 03:27:00.169024 | orchestrator | Friday 13 February 2026 03:26:59 +0000 (0:00:00.501) 0:01:09.593 ******* 2026-02-13 03:27:00.169030 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:27:00.169055 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:27:00.169062 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:27:00.169069 | orchestrator | 2026-02-13 03:27:00.169075 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-13 03:27:00.169085 | orchestrator | Friday 13 February 2026 03:27:00 +0000 (0:00:00.917) 0:01:10.511 ******* 2026-02-13 03:27:10.889433 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:27:10.889565 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:27:10.889585 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:27:10.889600 | orchestrator | 2026-02-13 03:27:10.889618 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-13 03:27:10.889652 | orchestrator | Friday 13 February 2026 03:27:00 +0000 (0:00:00.316) 0:01:10.827 ******* 2026-02-13 03:27:10.889667 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:27:10.889676 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:27:10.889685 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:27:10.889693 | orchestrator | 2026-02-13 03:27:10.889702 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-13 03:27:10.889711 | orchestrator | Friday 13 February 2026 03:27:00 +0000 (0:00:00.305) 0:01:11.133 ******* 2026-02-13 03:27:10.889720 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:27:10.889729 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:27:10.889737 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:27:10.889746 | orchestrator | 
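The lookup tasks that follow probe whether NB/SB database volumes and an existing raft cluster are present before bootstrapping. After the ovn-db role has run, the same cluster state can be checked manually; a hedged sketch, assuming Kolla's usual container names (`ovn_nb_db`, `ovn_sb_db`) and control socket paths, which are not confirmed by this log:

```shell
# Sketch, assuming typical Kolla container names and ctl socket locations.
# Shows raft role (leader/follower), term, and connected servers.
docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl \
    cluster/status OVN_Northbound
docker exec ovn_sb_db ovn-appctl -t /var/run/ovn/ovnsb_db.ctl \
    cluster/status OVN_Southbound
```

On this run all liveness and leader checks are skipped, consistent with a fresh deployment where no prior NB/SB volumes exist and `bootstrap-initial.yml` is included instead.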
2026-02-13 03:27:10.889755 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-13 03:27:10.889763 | orchestrator | Friday 13 February 2026 03:27:01 +0000 (0:00:00.305) 0:01:11.439 *******
2026-02-13 03:27:10.889772 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:27:10.889780 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:27:10.889789 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:27:10.889797 | orchestrator |
2026-02-13 03:27:10.889806 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-13 03:27:10.889815 | orchestrator | Friday 13 February 2026 03:27:01 +0000 (0:00:00.484) 0:01:11.923 *******
2026-02-13 03:27:10.889824 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.889833 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.889842 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.889851 | orchestrator |
2026-02-13 03:27:10.889860 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-13 03:27:10.889888 | orchestrator | Friday 13 February 2026 03:27:01 +0000 (0:00:00.292) 0:01:12.216 *******
2026-02-13 03:27:10.889897 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.889906 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.889914 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.889923 | orchestrator |
2026-02-13 03:27:10.889931 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-13 03:27:10.889940 | orchestrator | Friday 13 February 2026 03:27:02 +0000 (0:00:00.310) 0:01:12.526 *******
2026-02-13 03:27:10.889948 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.889957 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.889966 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.889974 | orchestrator |
2026-02-13 03:27:10.889984 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-13 03:27:10.889995 | orchestrator | Friday 13 February 2026 03:27:02 +0000 (0:00:00.276) 0:01:12.802 *******
2026-02-13 03:27:10.890004 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890083 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890103 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890119 | orchestrator |
2026-02-13 03:27:10.890135 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-13 03:27:10.890151 | orchestrator | Friday 13 February 2026 03:27:02 +0000 (0:00:00.282) 0:01:13.085 *******
2026-02-13 03:27:10.890164 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890175 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890185 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890195 | orchestrator |
2026-02-13 03:27:10.890205 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-13 03:27:10.890215 | orchestrator | Friday 13 February 2026 03:27:03 +0000 (0:00:00.447) 0:01:13.533 *******
2026-02-13 03:27:10.890225 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890234 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890246 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890266 | orchestrator |
2026-02-13 03:27:10.890276 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-13 03:27:10.890286 | orchestrator | Friday 13 February 2026 03:27:03 +0000 (0:00:00.290) 0:01:13.824 *******
2026-02-13 03:27:10.890296 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890307 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890316 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890326 | orchestrator |
2026-02-13 03:27:10.890336 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-13 03:27:10.890346 | orchestrator | Friday 13 February 2026 03:27:03 +0000 (0:00:00.302) 0:01:14.126 *******
2026-02-13 03:27:10.890355 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890363 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890372 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890380 | orchestrator |
2026-02-13 03:27:10.890389 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-13 03:27:10.890397 | orchestrator | Friday 13 February 2026 03:27:04 +0000 (0:00:00.282) 0:01:14.409 *******
2026-02-13 03:27:10.890406 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890414 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890423 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890431 | orchestrator |
2026-02-13 03:27:10.890440 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-13 03:27:10.890448 | orchestrator | Friday 13 February 2026 03:27:04 +0000 (0:00:00.496) 0:01:14.905 *******
2026-02-13 03:27:10.890457 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890465 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890474 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890483 | orchestrator |
2026-02-13 03:27:10.890492 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-13 03:27:10.890509 | orchestrator | Friday 13 February 2026 03:27:04 +0000 (0:00:00.276) 0:01:15.181 *******
2026-02-13 03:27:10.890517 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890526 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890535 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890543 | orchestrator |
2026-02-13 03:27:10.890552 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-13 03:27:10.890561 | orchestrator | Friday 13 February 2026 03:27:05 +0000 (0:00:00.329) 0:01:15.511 *******
2026-02-13 03:27:10.890588 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890597 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890606 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890614 | orchestrator |
2026-02-13 03:27:10.890623 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-13 03:27:10.890637 | orchestrator | Friday 13 February 2026 03:27:05 +0000 (0:00:00.356) 0:01:15.868 *******
2026-02-13 03:27:10.890646 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:27:10.890655 | orchestrator |
2026-02-13 03:27:10.890664 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-13 03:27:10.890672 | orchestrator | Friday 13 February 2026 03:27:06 +0000 (0:00:00.727) 0:01:16.595 *******
2026-02-13 03:27:10.890681 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:27:10.890690 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:27:10.890698 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:27:10.890707 | orchestrator |
2026-02-13 03:27:10.890715 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-13 03:27:10.890724 | orchestrator | Friday 13 February 2026 03:27:06 +0000 (0:00:00.414) 0:01:17.000 *******
2026-02-13 03:27:10.890733 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:27:10.890741 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:27:10.890750 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:27:10.890758 | orchestrator |
2026-02-13 03:27:10.890767 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-13 03:27:10.890775 | orchestrator | Friday 13 February 2026 03:27:07 +0000 (0:00:00.414) 0:01:17.414 *******
2026-02-13 03:27:10.890784 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890792 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890801 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890810 | orchestrator |
2026-02-13 03:27:10.890818 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-13 03:27:10.890827 | orchestrator | Friday 13 February 2026 03:27:07 +0000 (0:00:00.311) 0:01:17.726 *******
2026-02-13 03:27:10.890835 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890844 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890852 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890861 | orchestrator |
2026-02-13 03:27:10.890869 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-13 03:27:10.890878 | orchestrator | Friday 13 February 2026 03:27:07 +0000 (0:00:00.527) 0:01:18.254 *******
2026-02-13 03:27:10.890887 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890895 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890904 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890912 | orchestrator |
2026-02-13 03:27:10.890921 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-13 03:27:10.890929 | orchestrator | Friday 13 February 2026 03:27:08 +0000 (0:00:00.362) 0:01:18.616 *******
2026-02-13 03:27:10.890938 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.890946 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.890955 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.890963 | orchestrator |
2026-02-13 03:27:10.890972 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-13 03:27:10.890980 | orchestrator | Friday 13 February 2026 03:27:08 +0000 (0:00:00.316) 0:01:18.933 *******
2026-02-13 03:27:10.890998 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.891007 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.891016 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.891024 | orchestrator |
2026-02-13 03:27:10.891033 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-13 03:27:10.891062 | orchestrator | Friday 13 February 2026 03:27:08 +0000 (0:00:00.332) 0:01:19.265 *******
2026-02-13 03:27:10.891077 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:10.891086 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:10.891095 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:10.891104 | orchestrator |
2026-02-13 03:27:10.891112 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-13 03:27:10.891121 | orchestrator | Friday 13 February 2026 03:27:09 +0000 (0:00:00.500) 0:01:19.765 *******
2026-02-13 03:27:10.891132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:10.891143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:10.891152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:10.891172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.141702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.141809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.141824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.141836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.141870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.141882 | orchestrator |
2026-02-13 03:27:17.141894 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-13 03:27:17.141906 | orchestrator | Friday 13 February 2026 03:27:10 +0000 (0:00:01.469) 0:01:21.235 *******
2026-02-13 03:27:17.141917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.141929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.141939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.141949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.141990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.142003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.142013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.142128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.142164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.142180 | orchestrator |
2026-02-13 03:27:17.142190 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-13 03:27:17.142200 | orchestrator | Friday 13 February 2026 03:27:14 +0000 (0:00:03.799) 0:01:25.034 *******
2026-02-13 03:27:17.142212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.142223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.142235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.142246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.142257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:17.142285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.531667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.531797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.531811 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.531821 | orchestrator |
2026-02-13 03:27:41.531833 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-13 03:27:41.531844 | orchestrator | Friday 13 February 2026 03:27:16 +0000 (0:00:02.080) 0:01:27.115 *******
2026-02-13 03:27:41.531852 | orchestrator |
2026-02-13 03:27:41.531861 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-13 03:27:41.531870 | orchestrator | Friday 13 February 2026 03:27:16 +0000 (0:00:00.079) 0:01:27.194 *******
2026-02-13 03:27:41.531878 | orchestrator |
2026-02-13 03:27:41.531886 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-13 03:27:41.531895 | orchestrator | Friday 13 February 2026 03:27:17 +0000 (0:00:00.226) 0:01:27.420 *******
2026-02-13 03:27:41.531903 | orchestrator |
2026-02-13 03:27:41.531912 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-13 03:27:41.531920 | orchestrator | Friday 13 February 2026 03:27:17 +0000 (0:00:00.064) 0:01:27.485 *******
2026-02-13 03:27:41.531929 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:27:41.531939 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:27:41.531947 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:27:41.531956 | orchestrator |
2026-02-13 03:27:41.531964 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-13 03:27:41.531973 | orchestrator | Friday 13 February 2026 03:27:24 +0000 (0:00:07.602) 0:01:35.088 *******
2026-02-13 03:27:41.531981 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:27:41.531990 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:27:41.531998 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:27:41.532006 | orchestrator |
2026-02-13 03:27:41.532015 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-13 03:27:41.532023 | orchestrator | Friday 13 February 2026 03:27:32 +0000 (0:00:07.522) 0:01:42.610 *******
2026-02-13 03:27:41.532032 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:27:41.532040 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:27:41.532049 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:27:41.532081 | orchestrator |
2026-02-13 03:27:41.532090 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-13 03:27:41.532099 | orchestrator | Friday 13 February 2026 03:27:34 +0000 (0:00:02.498) 0:01:45.108 *******
2026-02-13 03:27:41.532107 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:27:41.532116 | orchestrator |
2026-02-13 03:27:41.532125 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-13 03:27:41.532133 | orchestrator | Friday 13 February 2026 03:27:34 +0000 (0:00:00.121) 0:01:45.229 *******
2026-02-13 03:27:41.532142 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:27:41.532153 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:27:41.532163 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:27:41.532173 | orchestrator |
2026-02-13 03:27:41.532183 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-13 03:27:41.532193 | orchestrator | Friday 13 February 2026 03:27:35 +0000 (0:00:01.012) 0:01:46.241 *******
2026-02-13 03:27:41.532203 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:41.532220 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:41.532231 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:27:41.532241 | orchestrator |
2026-02-13 03:27:41.532251 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-13 03:27:41.532261 | orchestrator | Friday 13 February 2026 03:27:36 +0000 (0:00:00.618) 0:01:46.860 *******
2026-02-13 03:27:41.532271 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:27:41.532281 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:27:41.532291 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:27:41.532301 | orchestrator |
2026-02-13 03:27:41.532310 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-13 03:27:41.532334 | orchestrator | Friday 13 February 2026 03:27:37 +0000 (0:00:00.767) 0:01:47.627 *******
2026-02-13 03:27:41.532344 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:27:41.532353 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:27:41.532363 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:27:41.532373 | orchestrator |
2026-02-13 03:27:41.532383 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-13 03:27:41.532393 | orchestrator | Friday 13 February 2026 03:27:37 +0000 (0:00:00.606) 0:01:48.234 *******
2026-02-13 03:27:41.532403 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:27:41.532411 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:27:41.532437 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:27:41.532447 | orchestrator |
2026-02-13 03:27:41.532456 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-13 03:27:41.532464 | orchestrator | Friday 13 February 2026 03:27:39 +0000 (0:00:01.185) 0:01:49.420 *******
2026-02-13 03:27:41.532473 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:27:41.532481 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:27:41.532490 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:27:41.532498 | orchestrator |
2026-02-13 03:27:41.532507 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-13 03:27:41.532516 | orchestrator | Friday 13 February 2026 03:27:39 +0000 (0:00:00.721) 0:01:50.142 *******
2026-02-13 03:27:41.532524 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:27:41.532543 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:27:41.532552 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:27:41.532560 | orchestrator |
2026-02-13 03:27:41.532569 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-13 03:27:41.532577 | orchestrator | Friday 13 February 2026 03:27:40 +0000 (0:00:00.301) 0:01:50.443 *******
2026-02-13 03:27:41.532588 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.532598 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.532606 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.532615 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.532630 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.532639 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.532655 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.532682 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:41.532715 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689317 | orchestrator |
2026-02-13 03:27:48.689429 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-13 03:27:48.689445 | orchestrator | Friday 13 February 2026 03:27:41 +0000 (0:00:01.432) 0:01:51.875 *******
2026-02-13 03:27:48.689459 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689474 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689486 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689497 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689557 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689607 | orchestrator |
2026-02-13 03:27:48.689618 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-13 03:27:48.689629 | orchestrator | Friday 13 February 2026 03:27:45 +0000 (0:00:03.843) 0:01:55.719 *******
2026-02-13 03:27:48.689658 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689670 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689681 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689692 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689734 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 03:27:48.689745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:27:48.689761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 03:27:48.689772 | orchestrator | 2026-02-13 03:27:48.689783 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-13 03:27:48.689794 | orchestrator | Friday 13 February 2026 03:27:48 +0000 (0:00:03.113) 0:01:58.833 ******* 2026-02-13 03:27:48.689805 | orchestrator | 2026-02-13 03:27:48.689816 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-13 03:27:48.689827 | orchestrator | Friday 13 February 2026 03:27:48 +0000 (0:00:00.062) 0:01:58.895 ******* 2026-02-13 03:27:48.689837 | orchestrator | 2026-02-13 03:27:48.689848 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-13 03:27:48.689861 | orchestrator | Friday 13 February 2026 03:27:48 +0000 (0:00:00.068) 0:01:58.964 ******* 2026-02-13 03:27:48.689873 | orchestrator | 2026-02-13 03:27:48.689892 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-13 03:28:12.776239 | orchestrator | Friday 13 February 2026 03:27:48 +0000 (0:00:00.065) 0:01:59.029 ******* 2026-02-13 03:28:12.776353 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:28:12.776369 | orchestrator | changed: 
[testbed-node-2] 2026-02-13 03:28:12.776381 | orchestrator | 2026-02-13 03:28:12.776393 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-13 03:28:12.776405 | orchestrator | Friday 13 February 2026 03:27:54 +0000 (0:00:06.223) 0:02:05.253 ******* 2026-02-13 03:28:12.776416 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:28:12.776427 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:28:12.776438 | orchestrator | 2026-02-13 03:28:12.776449 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-13 03:28:12.776485 | orchestrator | Friday 13 February 2026 03:28:01 +0000 (0:00:06.176) 0:02:11.430 ******* 2026-02-13 03:28:12.776497 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:28:12.776508 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:28:12.776518 | orchestrator | 2026-02-13 03:28:12.776529 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-13 03:28:12.776540 | orchestrator | Friday 13 February 2026 03:28:07 +0000 (0:00:06.187) 0:02:17.617 ******* 2026-02-13 03:28:12.776551 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:28:12.776562 | orchestrator | 2026-02-13 03:28:12.776573 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-13 03:28:12.776583 | orchestrator | Friday 13 February 2026 03:28:07 +0000 (0:00:00.127) 0:02:17.744 ******* 2026-02-13 03:28:12.776594 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:28:12.776606 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:28:12.776616 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:28:12.776627 | orchestrator | 2026-02-13 03:28:12.776656 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-13 03:28:12.776677 | orchestrator | Friday 13 February 2026 03:28:08 +0000 (0:00:00.979) 0:02:18.724 ******* 
2026-02-13 03:28:12.776688 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:28:12.776699 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:28:12.776712 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:28:12.776724 | orchestrator |
2026-02-13 03:28:12.776736 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-13 03:28:12.776749 | orchestrator | Friday 13 February 2026 03:28:08 +0000 (0:00:00.606) 0:02:19.331 *******
2026-02-13 03:28:12.776762 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:28:12.776774 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:28:12.776787 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:28:12.776799 | orchestrator |
2026-02-13 03:28:12.776812 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-13 03:28:12.776824 | orchestrator | Friday 13 February 2026 03:28:09 +0000 (0:00:00.779) 0:02:20.110 *******
2026-02-13 03:28:12.776837 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:28:12.776850 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:28:12.776862 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:28:12.776874 | orchestrator |
2026-02-13 03:28:12.776887 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-13 03:28:12.776899 | orchestrator | Friday 13 February 2026 03:28:10 +0000 (0:00:00.633) 0:02:20.743 *******
2026-02-13 03:28:12.776912 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:28:12.776923 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:28:12.776936 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:28:12.776948 | orchestrator |
2026-02-13 03:28:12.776960 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-13 03:28:12.776973 | orchestrator | Friday 13 February 2026 03:28:11 +0000 (0:00:00.980) 0:02:21.724 *******
2026-02-13 03:28:12.776985 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:28:12.776997 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:28:12.777009 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:28:12.777021 | orchestrator |
2026-02-13 03:28:12.777033 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:28:12.777047 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-13 03:28:12.777061 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-13 03:28:12.777097 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-02-13 03:28:12.777109 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:28:12.777129 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:28:12.777140 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:28:12.777151 | orchestrator |
2026-02-13 03:28:12.777162 | orchestrator |
2026-02-13 03:28:12.777201 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:28:12.777213 | orchestrator | Friday 13 February 2026 03:28:12 +0000 (0:00:00.926) 0:02:22.651 *******
2026-02-13 03:28:12.777224 | orchestrator | ===============================================================================
2026-02-13 03:28:12.777235 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.36s
2026-02-13 03:28:12.777246 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.89s
2026-02-13 03:28:12.777256 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.83s
2026-02-13 03:28:12.777267 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.70s
2026-02-13 03:28:12.777278 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.69s
2026-02-13 03:28:12.777307 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.84s
2026-02-13 03:28:12.777318 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.80s
2026-02-13 03:28:12.777329 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.11s
2026-02-13 03:28:12.777340 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.57s
2026-02-13 03:28:12.777350 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.08s
2026-02-13 03:28:12.777361 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.58s
2026-02-13 03:28:12.777372 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.57s
2026-02-13 03:28:12.777383 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.53s
2026-02-13 03:28:12.777393 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.47s
2026-02-13 03:28:12.777404 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.43s
2026-02-13 03:28:12.777415 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.35s
2026-02-13 03:28:12.777425 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.19s
2026-02-13 03:28:12.777436 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.18s
2026-02-13 03:28:12.777447 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.13s
2026-02-13 03:28:12.777458 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.06s
2026-02-13 03:28:13.172331 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-13 03:28:13.172433 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh
2026-02-13 03:28:15.402917 | orchestrator | 2026-02-13 03:28:15 | INFO  | Trying to run play wipe-partitions in environment custom
2026-02-13 03:28:25.527032 | orchestrator | 2026-02-13 03:28:25 | INFO  | Task 68e5bed8-aafe-404d-9647-4f3c7b343719 (wipe-partitions) was prepared for execution.
2026-02-13 03:28:25.527162 | orchestrator | 2026-02-13 03:28:25 | INFO  | It takes a moment until task 68e5bed8-aafe-404d-9647-4f3c7b343719 (wipe-partitions) has been started and output is visible here.
2026-02-13 03:28:38.956626 | orchestrator |
2026-02-13 03:28:38.956737 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-02-13 03:28:38.956754 | orchestrator |
2026-02-13 03:28:38.956766 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-02-13 03:28:38.956778 | orchestrator | Friday 13 February 2026 03:28:29 +0000 (0:00:00.127) 0:00:00.127 *******
2026-02-13 03:28:38.956814 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:28:38.956827 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:28:38.956838 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:28:38.956848 | orchestrator |
2026-02-13 03:28:38.956860 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-02-13 03:28:38.956870 | orchestrator | Friday 13 February 2026 03:28:30 +0000 (0:00:00.583) 0:00:00.711 *******
2026-02-13 03:28:38.956881 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:28:38.956892 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:28:38.956903 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:28:38.956913 | orchestrator |
2026-02-13 03:28:38.956924 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-02-13 03:28:38.956935 | orchestrator | Friday 13 February 2026 03:28:30 +0000 (0:00:00.377) 0:00:01.088 *******
2026-02-13 03:28:38.956946 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:28:38.956957 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:28:38.956968 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:28:38.956979 | orchestrator |
2026-02-13 03:28:38.956990 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-02-13 03:28:38.957000 | orchestrator | Friday 13 February 2026 03:28:31 +0000 (0:00:00.566) 0:00:01.654 *******
2026-02-13 03:28:38.957011 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:28:38.957022 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:28:38.957034 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:28:38.957045 | orchestrator |
2026-02-13 03:28:38.957055 | orchestrator | TASK [Check device availability] ***********************************************
2026-02-13 03:28:38.957066 | orchestrator | Friday 13 February 2026 03:28:31 +0000 (0:00:00.259) 0:00:01.913 *******
2026-02-13 03:28:38.957077 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-13 03:28:38.957172 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-13 03:28:38.957186 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-13 03:28:38.957199 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-13 03:28:38.957212 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-13 03:28:38.957224 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-13 03:28:38.957252 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-13 03:28:38.957265 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-13 03:28:38.957277 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-13 03:28:38.957290 | orchestrator |
2026-02-13 03:28:38.957302 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-02-13 03:28:38.957315 | orchestrator | Friday 13 February 2026 03:28:32 +0000 (0:00:01.247) 0:00:03.161 *******
2026-02-13 03:28:38.957327 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-02-13 03:28:38.957340 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-02-13 03:28:38.957352 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-02-13 03:28:38.957365 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-02-13 03:28:38.957377 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-02-13 03:28:38.957390 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-02-13 03:28:38.957403 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-02-13 03:28:38.957415 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-02-13 03:28:38.957427 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-02-13 03:28:38.957439 | orchestrator |
2026-02-13 03:28:38.957452 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-02-13 03:28:38.957464 | orchestrator | Friday 13 February 2026 03:28:34 +0000 (0:00:01.517) 0:00:04.678 *******
2026-02-13 03:28:38.957477 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-02-13 03:28:38.957489 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-02-13 03:28:38.957501 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-02-13 03:28:38.957514 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-02-13 03:28:38.957535 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-02-13 03:28:38.957548 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-02-13 03:28:38.957567 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-02-13 03:28:38.957586 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-02-13 03:28:38.957604 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-02-13 03:28:38.957623 | orchestrator |
2026-02-13 03:28:38.957642 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-02-13 03:28:38.957659 | orchestrator | Friday 13 February 2026 03:28:37 +0000 (0:00:03.209) 0:00:07.888 *******
2026-02-13 03:28:38.957676 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:28:38.957694 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:28:38.957711 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:28:38.957728 | orchestrator |
2026-02-13 03:28:38.957747 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-02-13 03:28:38.957768 | orchestrator | Friday 13 February 2026 03:28:37 +0000 (0:00:00.600) 0:00:08.489 *******
2026-02-13 03:28:38.957785 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:28:38.957805 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:28:38.957822 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:28:38.957840 | orchestrator |
2026-02-13 03:28:38.957858 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:28:38.957878 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:28:38.957899 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:28:38.957936 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:28:38.957948 | orchestrator |
2026-02-13 03:28:38.957959 | orchestrator |
2026-02-13 03:28:38.957975 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:28:38.957993 | orchestrator | Friday 13 February 2026 03:28:38 +0000 (0:00:00.643) 0:00:09.132 *******
2026-02-13 03:28:38.958011 | orchestrator | ===============================================================================
2026-02-13 03:28:38.958076 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.21s
2026-02-13 03:28:38.958113 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.52s
2026-02-13 03:28:38.958124 | orchestrator | Check device availability ----------------------------------------------- 1.25s
2026-02-13 03:28:38.958135 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s
2026-02-13 03:28:38.958146 | orchestrator | Reload udev rules ------------------------------------------------------- 0.60s
2026-02-13 03:28:38.958157 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s
2026-02-13 03:28:38.958167 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.57s
2026-02-13 03:28:38.958178 | orchestrator | Remove all rook related logical devices --------------------------------- 0.38s
2026-02-13 03:28:38.958189 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2026-02-13 03:28:51.301076 | orchestrator | 2026-02-13 03:28:51 | INFO  | Task 7cf3b5de-784b-4fa3-996b-3689554e15a7 (facts) was prepared for execution.
2026-02-13 03:28:51.301218 | orchestrator | 2026-02-13 03:28:51 | INFO  | It takes a moment until task 7cf3b5de-784b-4fa3-996b-3689554e15a7 (facts) has been started and output is visible here.
2026-02-13 03:29:04.112038 | orchestrator |
2026-02-13 03:29:04.112225 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-02-13 03:29:04.112245 | orchestrator |
2026-02-13 03:29:04.112258 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-02-13 03:29:04.112270 | orchestrator | Friday 13 February 2026 03:28:55 +0000 (0:00:00.262) 0:00:00.262 *******
2026-02-13 03:29:04.112310 | orchestrator | ok: [testbed-manager]
2026-02-13 03:29:04.112322 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:29:04.112349 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:29:04.112370 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:29:04.112381 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:29:04.112392 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:29:04.112402 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:29:04.112413 | orchestrator |
2026-02-13 03:29:04.112424 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-02-13 03:29:04.112436 | orchestrator | Friday 13 February 2026 03:28:56 +0000 (0:00:01.120) 0:00:01.382 *******
2026-02-13 03:29:04.112447 | orchestrator | skipping: [testbed-manager]
2026-02-13 03:29:04.112459 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:29:04.112470 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:29:04.112480 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:29:04.112491 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:29:04.112502 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:29:04.112512 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:29:04.112523 | orchestrator |
2026-02-13 03:29:04.112534 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-02-13 03:29:04.112545 | orchestrator |
2026-02-13 03:29:04.112556 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-02-13 03:29:04.112567 | orchestrator | Friday 13 February 2026 03:28:57 +0000 (0:00:01.229) 0:00:02.612 *******
2026-02-13 03:29:04.112578 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:29:04.112590 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:29:04.112602 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:29:04.112614 | orchestrator | ok: [testbed-manager]
2026-02-13 03:29:04.112627 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:29:04.112639 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:29:04.112651 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:29:04.112663 | orchestrator |
2026-02-13 03:29:04.112676 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-02-13 03:29:04.112688 | orchestrator |
2026-02-13 03:29:04.112701 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-02-13 03:29:04.112714 | orchestrator | Friday 13 February 2026 03:29:03 +0000 (0:00:05.256) 0:00:07.869 *******
2026-02-13 03:29:04.112726 | orchestrator | skipping: [testbed-manager]
2026-02-13 03:29:04.112739 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:29:04.112751 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:29:04.112763 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:29:04.112775 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:29:04.112788 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:29:04.112800 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:29:04.112813 | orchestrator |
2026-02-13 03:29:04.112826 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:29:04.112839 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:29:04.112902 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:29:04.112915 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:29:04.112926 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:29:04.112937 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:29:04.112948 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:29:04.112967 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:29:04.112978 | orchestrator |
2026-02-13 03:29:04.112989 | orchestrator |
2026-02-13 03:29:04.113000 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:29:04.113011 | orchestrator | Friday 13 February 2026 03:29:03 +0000 (0:00:00.587) 0:00:08.456 *******
2026-02-13 03:29:04.113031 | orchestrator | ===============================================================================
2026-02-13 03:29:04.113049 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.26s
2026-02-13 03:29:04.113066 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s
2026-02-13 03:29:04.113083 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s
2026-02-13 03:29:04.113133 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s
2026-02-13 03:29:06.444567 | orchestrator | 2026-02-13 03:29:06 | INFO  | Task b975ffe5-b11d-47c6-af78-d5062f7af8e8 (ceph-configure-lvm-volumes) was prepared for execution.
2026-02-13 03:29:06.444663 | orchestrator | 2026-02-13 03:29:06 | INFO  | It takes a moment until task b975ffe5-b11d-47c6-af78-d5062f7af8e8 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-02-13 03:29:18.270335 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-13 03:29:18.270486 | orchestrator | 2.16.14
2026-02-13 03:29:18.270511 | orchestrator |
2026-02-13 03:29:18.270524 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-13 03:29:18.270537 | orchestrator |
2026-02-13 03:29:18.270548 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-13 03:29:18.270560 | orchestrator | Friday 13 February 2026 03:29:10 +0000 (0:00:00.320) 0:00:00.320 *******
2026-02-13 03:29:18.270572 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-13 03:29:18.270583 | orchestrator |
2026-02-13 03:29:18.270611 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-13 03:29:18.270622 | orchestrator | Friday 13 February 2026 03:29:11 +0000 (0:00:00.247) 0:00:00.568 *******
2026-02-13 03:29:18.270633 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:29:18.270644 | orchestrator |
2026-02-13 03:29:18.270655 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:29:18.270666 | orchestrator | Friday 13 February 2026 03:29:11 +0000 (0:00:00.222) 0:00:00.790 *******
2026-02-13 03:29:18.270677 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-13 03:29:18.270688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-13 03:29:18.270699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-13 03:29:18.270709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-13 03:29:18.270720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-13 03:29:18.270731 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-13 03:29:18.270741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-13 03:29:18.270752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-13 03:29:18.270763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-13 03:29:18.270774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-13 03:29:18.270785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-13 03:29:18.270795 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-13 03:29:18.270831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-13 03:29:18.270842 | orchestrator |
2026-02-13 03:29:18.270855 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:29:18.270868 | orchestrator | Friday 13 February 2026 03:29:11 +0000 (0:00:00.472) 0:00:01.262 *******
2026-02-13 03:29:18.270881 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:29:18.270894 | orchestrator |
2026-02-13 03:29:18.270907 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:29:18.270919 | orchestrator | Friday 13 February 2026 03:29:11 +0000 (0:00:00.201) 0:00:01.464 *******
2026-02-13 03:29:18.270932 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:29:18.270944 | orchestrator |
2026-02-13 03:29:18.270956 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:29:18.270970 | orchestrator | Friday 13 February 2026 03:29:12 +0000 (0:00:00.190) 0:00:01.654 *******
2026-02-13 03:29:18.270982 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:29:18.270994 | orchestrator |
2026-02-13 03:29:18.271007 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:29:18.271019 | orchestrator | Friday 13 February 2026 03:29:12 +0000 (0:00:00.215) 0:00:01.870 *******
2026-02-13 03:29:18.271032 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:29:18.271045 | orchestrator |
2026-02-13 03:29:18.271057 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:29:18.271067 | orchestrator | Friday 13 February 2026 03:29:12 +0000 (0:00:00.196) 0:00:02.066 *******
2026-02-13 03:29:18.271078 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:29:18.271089 | orchestrator |
2026-02-13 03:29:18.271163 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:29:18.271176 | orchestrator | Friday 13 February 2026 03:29:12 +0000 (0:00:00.208) 0:00:02.274 *******
2026-02-13 03:29:18.271188 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:29:18.271198 | orchestrator |
2026-02-13 03:29:18.271209 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:29:18.271220 | orchestrator | Friday 13 February 2026 03:29:13 +0000 (0:00:00.210) 0:00:02.488 *******
2026-02-13 03:29:18.271231 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:29:18.271242 | orchestrator |
2026-02-13 03:29:18.271252 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:29:18.271263 | orchestrator | Friday 13 February 2026 03:29:13 +0000 (0:00:00.196) 0:00:02.698 *******
2026-02-13 03:29:18.271274 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:18.271285 | orchestrator | 2026-02-13 03:29:18.271295 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:18.271306 | orchestrator | Friday 13 February 2026 03:29:13 +0000 (0:00:00.196) 0:00:02.895 ******* 2026-02-13 03:29:18.271317 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d) 2026-02-13 03:29:18.271330 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d) 2026-02-13 03:29:18.271341 | orchestrator | 2026-02-13 03:29:18.271352 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:18.271384 | orchestrator | Friday 13 February 2026 03:29:13 +0000 (0:00:00.413) 0:00:03.308 ******* 2026-02-13 03:29:18.271395 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165) 2026-02-13 03:29:18.271406 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165) 2026-02-13 03:29:18.271417 | orchestrator | 2026-02-13 03:29:18.271428 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:18.271439 | orchestrator | Friday 13 February 2026 03:29:14 +0000 (0:00:00.679) 0:00:03.988 ******* 2026-02-13 03:29:18.271456 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322) 2026-02-13 03:29:18.271476 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322) 2026-02-13 03:29:18.271487 | orchestrator | 2026-02-13 03:29:18.271497 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:18.271508 | orchestrator | Friday 13 February 2026 03:29:15 
+0000 (0:00:00.673) 0:00:04.662 ******* 2026-02-13 03:29:18.271519 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226) 2026-02-13 03:29:18.271530 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226) 2026-02-13 03:29:18.271540 | orchestrator | 2026-02-13 03:29:18.271551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:18.271562 | orchestrator | Friday 13 February 2026 03:29:16 +0000 (0:00:00.973) 0:00:05.636 ******* 2026-02-13 03:29:18.271573 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-13 03:29:18.271584 | orchestrator | 2026-02-13 03:29:18.271595 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:18.271605 | orchestrator | Friday 13 February 2026 03:29:16 +0000 (0:00:00.323) 0:00:05.959 ******* 2026-02-13 03:29:18.271616 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-13 03:29:18.271626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-13 03:29:18.271637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-13 03:29:18.271647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-13 03:29:18.271658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-13 03:29:18.271669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-13 03:29:18.271679 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-13 03:29:18.271690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-02-13 03:29:18.271700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-13 03:29:18.271711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-13 03:29:18.271722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-13 03:29:18.271732 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-13 03:29:18.271743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-13 03:29:18.271753 | orchestrator | 2026-02-13 03:29:18.271764 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:18.271774 | orchestrator | Friday 13 February 2026 03:29:16 +0000 (0:00:00.385) 0:00:06.345 ******* 2026-02-13 03:29:18.271785 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:18.271796 | orchestrator | 2026-02-13 03:29:18.271807 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:18.271817 | orchestrator | Friday 13 February 2026 03:29:17 +0000 (0:00:00.203) 0:00:06.549 ******* 2026-02-13 03:29:18.271828 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:18.271838 | orchestrator | 2026-02-13 03:29:18.271849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:18.271860 | orchestrator | Friday 13 February 2026 03:29:17 +0000 (0:00:00.206) 0:00:06.755 ******* 2026-02-13 03:29:18.271870 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:18.271881 | orchestrator | 2026-02-13 03:29:18.271892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:18.271903 | orchestrator | Friday 13 February 2026 03:29:17 
+0000 (0:00:00.198) 0:00:06.954 ******* 2026-02-13 03:29:18.271920 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:18.271931 | orchestrator | 2026-02-13 03:29:18.271942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:18.271953 | orchestrator | Friday 13 February 2026 03:29:17 +0000 (0:00:00.212) 0:00:07.167 ******* 2026-02-13 03:29:18.271963 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:18.271974 | orchestrator | 2026-02-13 03:29:18.271985 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:18.271995 | orchestrator | Friday 13 February 2026 03:29:17 +0000 (0:00:00.202) 0:00:07.369 ******* 2026-02-13 03:29:18.272006 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:18.272017 | orchestrator | 2026-02-13 03:29:18.272027 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:18.272038 | orchestrator | Friday 13 February 2026 03:29:18 +0000 (0:00:00.196) 0:00:07.565 ******* 2026-02-13 03:29:18.272049 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:18.272060 | orchestrator | 2026-02-13 03:29:18.272076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:25.912299 | orchestrator | Friday 13 February 2026 03:29:18 +0000 (0:00:00.228) 0:00:07.793 ******* 2026-02-13 03:29:25.912412 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.912428 | orchestrator | 2026-02-13 03:29:25.912442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:25.912453 | orchestrator | Friday 13 February 2026 03:29:18 +0000 (0:00:00.207) 0:00:08.001 ******* 2026-02-13 03:29:25.912465 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-13 03:29:25.912476 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-13 
03:29:25.912488 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-13 03:29:25.912514 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-13 03:29:25.912525 | orchestrator | 2026-02-13 03:29:25.912537 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:25.912548 | orchestrator | Friday 13 February 2026 03:29:19 +0000 (0:00:01.058) 0:00:09.059 ******* 2026-02-13 03:29:25.912559 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.912570 | orchestrator | 2026-02-13 03:29:25.912581 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:25.912592 | orchestrator | Friday 13 February 2026 03:29:19 +0000 (0:00:00.223) 0:00:09.282 ******* 2026-02-13 03:29:25.912603 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.912614 | orchestrator | 2026-02-13 03:29:25.912625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:25.912636 | orchestrator | Friday 13 February 2026 03:29:19 +0000 (0:00:00.200) 0:00:09.483 ******* 2026-02-13 03:29:25.912647 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.912658 | orchestrator | 2026-02-13 03:29:25.912669 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:25.912679 | orchestrator | Friday 13 February 2026 03:29:20 +0000 (0:00:00.216) 0:00:09.700 ******* 2026-02-13 03:29:25.912690 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.912701 | orchestrator | 2026-02-13 03:29:25.912712 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-13 03:29:25.912723 | orchestrator | Friday 13 February 2026 03:29:20 +0000 (0:00:00.211) 0:00:09.911 ******* 2026-02-13 03:29:25.912734 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2026-02-13 03:29:25.912745 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2026-02-13 03:29:25.912756 | orchestrator | 2026-02-13 03:29:25.912767 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-13 03:29:25.912778 | orchestrator | Friday 13 February 2026 03:29:20 +0000 (0:00:00.177) 0:00:10.089 ******* 2026-02-13 03:29:25.912789 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.912799 | orchestrator | 2026-02-13 03:29:25.912810 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-13 03:29:25.912821 | orchestrator | Friday 13 February 2026 03:29:20 +0000 (0:00:00.130) 0:00:10.220 ******* 2026-02-13 03:29:25.912855 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.912868 | orchestrator | 2026-02-13 03:29:25.912881 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-13 03:29:25.912894 | orchestrator | Friday 13 February 2026 03:29:20 +0000 (0:00:00.148) 0:00:10.368 ******* 2026-02-13 03:29:25.912906 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.912918 | orchestrator | 2026-02-13 03:29:25.912931 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-13 03:29:25.912943 | orchestrator | Friday 13 February 2026 03:29:20 +0000 (0:00:00.134) 0:00:10.503 ******* 2026-02-13 03:29:25.912956 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:29:25.912968 | orchestrator | 2026-02-13 03:29:25.912980 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-13 03:29:25.912992 | orchestrator | Friday 13 February 2026 03:29:21 +0000 (0:00:00.144) 0:00:10.648 ******* 2026-02-13 03:29:25.913005 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}}) 2026-02-13 03:29:25.913077 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7c5ad083-16ef-5861-9238-a28b124c66ab'}}) 2026-02-13 03:29:25.913090 | orchestrator | 2026-02-13 03:29:25.913129 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-13 03:29:25.913143 | orchestrator | Friday 13 February 2026 03:29:21 +0000 (0:00:00.165) 0:00:10.813 ******* 2026-02-13 03:29:25.913156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}})  2026-02-13 03:29:25.913170 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7c5ad083-16ef-5861-9238-a28b124c66ab'}})  2026-02-13 03:29:25.913182 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.913195 | orchestrator | 2026-02-13 03:29:25.913208 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-13 03:29:25.913218 | orchestrator | Friday 13 February 2026 03:29:21 +0000 (0:00:00.344) 0:00:11.158 ******* 2026-02-13 03:29:25.913229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}})  2026-02-13 03:29:25.913240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7c5ad083-16ef-5861-9238-a28b124c66ab'}})  2026-02-13 03:29:25.913251 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.913261 | orchestrator | 2026-02-13 03:29:25.913272 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-13 03:29:25.913283 | orchestrator | Friday 13 February 2026 03:29:21 +0000 (0:00:00.158) 0:00:11.317 ******* 2026-02-13 03:29:25.913294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}})  2026-02-13 03:29:25.913322 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7c5ad083-16ef-5861-9238-a28b124c66ab'}})  2026-02-13 03:29:25.913334 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.913345 | orchestrator | 2026-02-13 03:29:25.913356 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-13 03:29:25.913367 | orchestrator | Friday 13 February 2026 03:29:21 +0000 (0:00:00.142) 0:00:11.460 ******* 2026-02-13 03:29:25.913378 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:29:25.913389 | orchestrator | 2026-02-13 03:29:25.913400 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-13 03:29:25.913417 | orchestrator | Friday 13 February 2026 03:29:22 +0000 (0:00:00.148) 0:00:11.608 ******* 2026-02-13 03:29:25.913428 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:29:25.913439 | orchestrator | 2026-02-13 03:29:25.913505 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-13 03:29:25.913519 | orchestrator | Friday 13 February 2026 03:29:22 +0000 (0:00:00.144) 0:00:11.753 ******* 2026-02-13 03:29:25.913541 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.913551 | orchestrator | 2026-02-13 03:29:25.913563 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-13 03:29:25.913574 | orchestrator | Friday 13 February 2026 03:29:22 +0000 (0:00:00.137) 0:00:11.890 ******* 2026-02-13 03:29:25.913584 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.913595 | orchestrator | 2026-02-13 03:29:25.913605 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-13 03:29:25.913616 | orchestrator | Friday 13 February 2026 03:29:22 +0000 (0:00:00.137) 0:00:12.028 ******* 2026-02-13 03:29:25.913627 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.913638 | orchestrator | 2026-02-13 
03:29:25.913648 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-13 03:29:25.913659 | orchestrator | Friday 13 February 2026 03:29:22 +0000 (0:00:00.139) 0:00:12.167 ******* 2026-02-13 03:29:25.913670 | orchestrator | ok: [testbed-node-3] => { 2026-02-13 03:29:25.913680 | orchestrator |  "ceph_osd_devices": { 2026-02-13 03:29:25.913691 | orchestrator |  "sdb": { 2026-02-13 03:29:25.913703 | orchestrator |  "osd_lvm_uuid": "90d7f9ba-9289-5e80-9038-1ad4979f4e3f" 2026-02-13 03:29:25.913713 | orchestrator |  }, 2026-02-13 03:29:25.913724 | orchestrator |  "sdc": { 2026-02-13 03:29:25.913735 | orchestrator |  "osd_lvm_uuid": "7c5ad083-16ef-5861-9238-a28b124c66ab" 2026-02-13 03:29:25.913746 | orchestrator |  } 2026-02-13 03:29:25.913756 | orchestrator |  } 2026-02-13 03:29:25.913767 | orchestrator | } 2026-02-13 03:29:25.913778 | orchestrator | 2026-02-13 03:29:25.913789 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-13 03:29:25.913800 | orchestrator | Friday 13 February 2026 03:29:22 +0000 (0:00:00.134) 0:00:12.301 ******* 2026-02-13 03:29:25.913811 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.913821 | orchestrator | 2026-02-13 03:29:25.913832 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-13 03:29:25.913843 | orchestrator | Friday 13 February 2026 03:29:22 +0000 (0:00:00.141) 0:00:12.443 ******* 2026-02-13 03:29:25.913854 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.913864 | orchestrator | 2026-02-13 03:29:25.913875 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-13 03:29:25.913886 | orchestrator | Friday 13 February 2026 03:29:23 +0000 (0:00:00.136) 0:00:12.579 ******* 2026-02-13 03:29:25.913896 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:29:25.913907 | orchestrator | 2026-02-13 
03:29:25.913918 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-13 03:29:25.913998 | orchestrator | Friday 13 February 2026 03:29:23 +0000 (0:00:00.136) 0:00:12.716 ******* 2026-02-13 03:29:25.914009 | orchestrator | changed: [testbed-node-3] => { 2026-02-13 03:29:25.914168 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-13 03:29:25.914185 | orchestrator |  "ceph_osd_devices": { 2026-02-13 03:29:25.914197 | orchestrator |  "sdb": { 2026-02-13 03:29:25.914208 | orchestrator |  "osd_lvm_uuid": "90d7f9ba-9289-5e80-9038-1ad4979f4e3f" 2026-02-13 03:29:25.914219 | orchestrator |  }, 2026-02-13 03:29:25.914230 | orchestrator |  "sdc": { 2026-02-13 03:29:25.914252 | orchestrator |  "osd_lvm_uuid": "7c5ad083-16ef-5861-9238-a28b124c66ab" 2026-02-13 03:29:25.914263 | orchestrator |  } 2026-02-13 03:29:25.914274 | orchestrator |  }, 2026-02-13 03:29:25.914284 | orchestrator |  "lvm_volumes": [ 2026-02-13 03:29:25.914295 | orchestrator |  { 2026-02-13 03:29:25.914306 | orchestrator |  "data": "osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f", 2026-02-13 03:29:25.914317 | orchestrator |  "data_vg": "ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f" 2026-02-13 03:29:25.914327 | orchestrator |  }, 2026-02-13 03:29:25.914338 | orchestrator |  { 2026-02-13 03:29:25.914349 | orchestrator |  "data": "osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab", 2026-02-13 03:29:25.914370 | orchestrator |  "data_vg": "ceph-7c5ad083-16ef-5861-9238-a28b124c66ab" 2026-02-13 03:29:25.914381 | orchestrator |  } 2026-02-13 03:29:25.914392 | orchestrator |  ] 2026-02-13 03:29:25.914402 | orchestrator |  } 2026-02-13 03:29:25.914413 | orchestrator | } 2026-02-13 03:29:25.914424 | orchestrator | 2026-02-13 03:29:25.914435 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-13 03:29:25.914445 | orchestrator | Friday 13 February 2026 03:29:23 +0000 (0:00:00.418) 0:00:13.134 ******* 2026-02-13 
03:29:25.914456 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-13 03:29:25.914467 | orchestrator | 2026-02-13 03:29:25.914477 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-02-13 03:29:25.914488 | orchestrator | 2026-02-13 03:29:25.914499 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-13 03:29:25.914509 | orchestrator | Friday 13 February 2026 03:29:25 +0000 (0:00:01.783) 0:00:14.917 ******* 2026-02-13 03:29:25.914520 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-13 03:29:25.914531 | orchestrator | 2026-02-13 03:29:25.914542 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-13 03:29:25.914552 | orchestrator | Friday 13 February 2026 03:29:25 +0000 (0:00:00.274) 0:00:15.192 ******* 2026-02-13 03:29:25.914563 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:29:25.914574 | orchestrator | 2026-02-13 03:29:25.914598 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.020452 | orchestrator | Friday 13 February 2026 03:29:25 +0000 (0:00:00.246) 0:00:15.439 ******* 2026-02-13 03:29:35.020596 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-13 03:29:35.020623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-13 03:29:35.020644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-13 03:29:35.020684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-13 03:29:35.020706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-13 03:29:35.020727 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-13 03:29:35.020747 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-13 03:29:35.020767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-13 03:29:35.020788 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-13 03:29:35.020807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-13 03:29:35.020827 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-13 03:29:35.020847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-13 03:29:35.020867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-02-13 03:29:35.020887 | orchestrator | 2026-02-13 03:29:35.020909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.020928 | orchestrator | Friday 13 February 2026 03:29:26 +0000 (0:00:00.382) 0:00:15.821 ******* 2026-02-13 03:29:35.020949 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.020969 | orchestrator | 2026-02-13 03:29:35.020991 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021012 | orchestrator | Friday 13 February 2026 03:29:26 +0000 (0:00:00.220) 0:00:16.042 ******* 2026-02-13 03:29:35.021031 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.021050 | orchestrator | 2026-02-13 03:29:35.021068 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021089 | orchestrator | Friday 13 February 2026 03:29:26 +0000 (0:00:00.203) 0:00:16.245 ******* 2026-02-13 03:29:35.021171 | orchestrator | skipping: 
[testbed-node-4] 2026-02-13 03:29:35.021193 | orchestrator | 2026-02-13 03:29:35.021214 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021236 | orchestrator | Friday 13 February 2026 03:29:26 +0000 (0:00:00.214) 0:00:16.459 ******* 2026-02-13 03:29:35.021256 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.021277 | orchestrator | 2026-02-13 03:29:35.021297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021316 | orchestrator | Friday 13 February 2026 03:29:27 +0000 (0:00:00.609) 0:00:17.068 ******* 2026-02-13 03:29:35.021336 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.021356 | orchestrator | 2026-02-13 03:29:35.021376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021396 | orchestrator | Friday 13 February 2026 03:29:27 +0000 (0:00:00.210) 0:00:17.279 ******* 2026-02-13 03:29:35.021416 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.021436 | orchestrator | 2026-02-13 03:29:35.021457 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021477 | orchestrator | Friday 13 February 2026 03:29:27 +0000 (0:00:00.245) 0:00:17.525 ******* 2026-02-13 03:29:35.021496 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.021516 | orchestrator | 2026-02-13 03:29:35.021537 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021558 | orchestrator | Friday 13 February 2026 03:29:28 +0000 (0:00:00.209) 0:00:17.734 ******* 2026-02-13 03:29:35.021578 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.021597 | orchestrator | 2026-02-13 03:29:35.021618 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021639 | 
orchestrator | Friday 13 February 2026 03:29:28 +0000 (0:00:00.232) 0:00:17.967 ******* 2026-02-13 03:29:35.021659 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d) 2026-02-13 03:29:35.021682 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d) 2026-02-13 03:29:35.021703 | orchestrator | 2026-02-13 03:29:35.021722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021742 | orchestrator | Friday 13 February 2026 03:29:28 +0000 (0:00:00.425) 0:00:18.392 ******* 2026-02-13 03:29:35.021762 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788) 2026-02-13 03:29:35.021782 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788) 2026-02-13 03:29:35.021801 | orchestrator | 2026-02-13 03:29:35.021821 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021839 | orchestrator | Friday 13 February 2026 03:29:29 +0000 (0:00:00.452) 0:00:18.845 ******* 2026-02-13 03:29:35.021856 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52) 2026-02-13 03:29:35.021874 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52) 2026-02-13 03:29:35.021891 | orchestrator | 2026-02-13 03:29:35.021908 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.021952 | orchestrator | Friday 13 February 2026 03:29:29 +0000 (0:00:00.450) 0:00:19.296 ******* 2026-02-13 03:29:35.021972 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460) 2026-02-13 03:29:35.021991 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460) 2026-02-13 03:29:35.022010 | orchestrator | 2026-02-13 03:29:35.022142 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:35.022175 | orchestrator | Friday 13 February 2026 03:29:30 +0000 (0:00:00.651) 0:00:19.947 ******* 2026-02-13 03:29:35.022196 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-13 03:29:35.022232 | orchestrator | 2026-02-13 03:29:35.022252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022263 | orchestrator | Friday 13 February 2026 03:29:30 +0000 (0:00:00.547) 0:00:20.495 ******* 2026-02-13 03:29:35.022274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-02-13 03:29:35.022285 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-13 03:29:35.022295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-13 03:29:35.022306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-13 03:29:35.022316 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-13 03:29:35.022327 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-13 03:29:35.022338 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-13 03:29:35.022348 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-13 03:29:35.022359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-13 03:29:35.022370 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-13 03:29:35.022381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-13 03:29:35.022392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-13 03:29:35.022403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-13 03:29:35.022413 | orchestrator | 2026-02-13 03:29:35.022424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022435 | orchestrator | Friday 13 February 2026 03:29:31 +0000 (0:00:00.847) 0:00:21.342 ******* 2026-02-13 03:29:35.022446 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.022457 | orchestrator | 2026-02-13 03:29:35.022468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022478 | orchestrator | Friday 13 February 2026 03:29:32 +0000 (0:00:00.211) 0:00:21.554 ******* 2026-02-13 03:29:35.022489 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.022500 | orchestrator | 2026-02-13 03:29:35.022510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022521 | orchestrator | Friday 13 February 2026 03:29:32 +0000 (0:00:00.219) 0:00:21.773 ******* 2026-02-13 03:29:35.022532 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.022542 | orchestrator | 2026-02-13 03:29:35.022553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022564 | orchestrator | Friday 13 February 2026 03:29:32 +0000 (0:00:00.205) 0:00:21.979 ******* 2026-02-13 03:29:35.022574 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.022585 | orchestrator | 2026-02-13 03:29:35.022596 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022606 | orchestrator | Friday 13 February 2026 03:29:32 +0000 (0:00:00.205) 0:00:22.185 ******* 2026-02-13 03:29:35.022617 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.022627 | orchestrator | 2026-02-13 03:29:35.022638 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022649 | orchestrator | Friday 13 February 2026 03:29:32 +0000 (0:00:00.207) 0:00:22.392 ******* 2026-02-13 03:29:35.022660 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.022670 | orchestrator | 2026-02-13 03:29:35.022685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022703 | orchestrator | Friday 13 February 2026 03:29:33 +0000 (0:00:00.214) 0:00:22.606 ******* 2026-02-13 03:29:35.022721 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.022749 | orchestrator | 2026-02-13 03:29:35.022764 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022780 | orchestrator | Friday 13 February 2026 03:29:33 +0000 (0:00:00.213) 0:00:22.820 ******* 2026-02-13 03:29:35.022790 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:35.022800 | orchestrator | 2026-02-13 03:29:35.022810 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022819 | orchestrator | Friday 13 February 2026 03:29:33 +0000 (0:00:00.206) 0:00:23.026 ******* 2026-02-13 03:29:35.022829 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-13 03:29:35.022839 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-13 03:29:35.022849 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-13 03:29:35.022859 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-13 03:29:35.022868 | orchestrator | 2026-02-13 
03:29:35.022878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:35.022888 | orchestrator | Friday 13 February 2026 03:29:34 +0000 (0:00:00.869) 0:00:23.896 ******* 2026-02-13 03:29:35.022897 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.016800 | orchestrator | 2026-02-13 03:29:41.016912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:41.016929 | orchestrator | Friday 13 February 2026 03:29:35 +0000 (0:00:00.654) 0:00:24.550 ******* 2026-02-13 03:29:41.016941 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.016952 | orchestrator | 2026-02-13 03:29:41.016964 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:41.016976 | orchestrator | Friday 13 February 2026 03:29:35 +0000 (0:00:00.212) 0:00:24.762 ******* 2026-02-13 03:29:41.017004 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017016 | orchestrator | 2026-02-13 03:29:41.017027 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:41.017038 | orchestrator | Friday 13 February 2026 03:29:35 +0000 (0:00:00.224) 0:00:24.987 ******* 2026-02-13 03:29:41.017049 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017060 | orchestrator | 2026-02-13 03:29:41.017071 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-13 03:29:41.017082 | orchestrator | Friday 13 February 2026 03:29:35 +0000 (0:00:00.231) 0:00:25.218 ******* 2026-02-13 03:29:41.017093 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-02-13 03:29:41.017104 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-02-13 03:29:41.017154 | orchestrator | 2026-02-13 03:29:41.017166 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-02-13 03:29:41.017176 | orchestrator | Friday 13 February 2026 03:29:35 +0000 (0:00:00.177) 0:00:25.396 ******* 2026-02-13 03:29:41.017187 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017198 | orchestrator | 2026-02-13 03:29:41.017210 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-13 03:29:41.017220 | orchestrator | Friday 13 February 2026 03:29:36 +0000 (0:00:00.147) 0:00:25.543 ******* 2026-02-13 03:29:41.017231 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017242 | orchestrator | 2026-02-13 03:29:41.017253 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-13 03:29:41.017264 | orchestrator | Friday 13 February 2026 03:29:36 +0000 (0:00:00.137) 0:00:25.681 ******* 2026-02-13 03:29:41.017275 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017285 | orchestrator | 2026-02-13 03:29:41.017296 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-13 03:29:41.017307 | orchestrator | Friday 13 February 2026 03:29:36 +0000 (0:00:00.138) 0:00:25.819 ******* 2026-02-13 03:29:41.017318 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:29:41.017330 | orchestrator | 2026-02-13 03:29:41.017342 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-13 03:29:41.017355 | orchestrator | Friday 13 February 2026 03:29:36 +0000 (0:00:00.148) 0:00:25.968 ******* 2026-02-13 03:29:41.017391 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}}) 2026-02-13 03:29:41.017405 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5ce47f09-4cf3-58ef-8e90-2b997425535f'}}) 2026-02-13 03:29:41.017418 | orchestrator | 2026-02-13 03:29:41.017431 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-13 03:29:41.017444 | orchestrator | Friday 13 February 2026 03:29:36 +0000 (0:00:00.162) 0:00:26.130 ******* 2026-02-13 03:29:41.017457 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}})  2026-02-13 03:29:41.017472 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5ce47f09-4cf3-58ef-8e90-2b997425535f'}})  2026-02-13 03:29:41.017485 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017497 | orchestrator | 2026-02-13 03:29:41.017510 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-13 03:29:41.017523 | orchestrator | Friday 13 February 2026 03:29:36 +0000 (0:00:00.157) 0:00:26.288 ******* 2026-02-13 03:29:41.017535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}})  2026-02-13 03:29:41.017553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5ce47f09-4cf3-58ef-8e90-2b997425535f'}})  2026-02-13 03:29:41.017573 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017585 | orchestrator | 2026-02-13 03:29:41.017595 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-13 03:29:41.017606 | orchestrator | Friday 13 February 2026 03:29:37 +0000 (0:00:00.360) 0:00:26.648 ******* 2026-02-13 03:29:41.017617 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}})  2026-02-13 03:29:41.017628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5ce47f09-4cf3-58ef-8e90-2b997425535f'}})  2026-02-13 03:29:41.017639 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017649 | 
orchestrator | 2026-02-13 03:29:41.017660 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-13 03:29:41.017671 | orchestrator | Friday 13 February 2026 03:29:37 +0000 (0:00:00.167) 0:00:26.816 ******* 2026-02-13 03:29:41.017682 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:29:41.017692 | orchestrator | 2026-02-13 03:29:41.017703 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-13 03:29:41.017714 | orchestrator | Friday 13 February 2026 03:29:37 +0000 (0:00:00.146) 0:00:26.962 ******* 2026-02-13 03:29:41.017724 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:29:41.017735 | orchestrator | 2026-02-13 03:29:41.017746 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-13 03:29:41.017756 | orchestrator | Friday 13 February 2026 03:29:37 +0000 (0:00:00.150) 0:00:27.113 ******* 2026-02-13 03:29:41.017787 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017804 | orchestrator | 2026-02-13 03:29:41.017816 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-13 03:29:41.017827 | orchestrator | Friday 13 February 2026 03:29:37 +0000 (0:00:00.143) 0:00:27.257 ******* 2026-02-13 03:29:41.017837 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017848 | orchestrator | 2026-02-13 03:29:41.017859 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-13 03:29:41.017870 | orchestrator | Friday 13 February 2026 03:29:37 +0000 (0:00:00.134) 0:00:27.392 ******* 2026-02-13 03:29:41.017887 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.017898 | orchestrator | 2026-02-13 03:29:41.017909 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-13 03:29:41.017919 | orchestrator | Friday 13 February 2026 03:29:37 +0000 
(0:00:00.133) 0:00:27.525 ******* 2026-02-13 03:29:41.017938 | orchestrator | ok: [testbed-node-4] => { 2026-02-13 03:29:41.017949 | orchestrator |  "ceph_osd_devices": { 2026-02-13 03:29:41.017960 | orchestrator |  "sdb": { 2026-02-13 03:29:41.017971 | orchestrator |  "osd_lvm_uuid": "43dba57c-3e97-52bb-978e-0b7bf56fe0c6" 2026-02-13 03:29:41.017982 | orchestrator |  }, 2026-02-13 03:29:41.017993 | orchestrator |  "sdc": { 2026-02-13 03:29:41.018005 | orchestrator |  "osd_lvm_uuid": "5ce47f09-4cf3-58ef-8e90-2b997425535f" 2026-02-13 03:29:41.018084 | orchestrator |  } 2026-02-13 03:29:41.018096 | orchestrator |  } 2026-02-13 03:29:41.018107 | orchestrator | } 2026-02-13 03:29:41.018142 | orchestrator | 2026-02-13 03:29:41.018153 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-13 03:29:41.018164 | orchestrator | Friday 13 February 2026 03:29:38 +0000 (0:00:00.149) 0:00:27.675 ******* 2026-02-13 03:29:41.018175 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.018186 | orchestrator | 2026-02-13 03:29:41.018203 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-13 03:29:41.018219 | orchestrator | Friday 13 February 2026 03:29:38 +0000 (0:00:00.136) 0:00:27.811 ******* 2026-02-13 03:29:41.018230 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.018241 | orchestrator | 2026-02-13 03:29:41.018251 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-13 03:29:41.018262 | orchestrator | Friday 13 February 2026 03:29:38 +0000 (0:00:00.142) 0:00:27.954 ******* 2026-02-13 03:29:41.018273 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:29:41.018283 | orchestrator | 2026-02-13 03:29:41.018294 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-13 03:29:41.018304 | orchestrator | Friday 13 February 2026 03:29:38 +0000 
(0:00:00.150) 0:00:28.105 ******* 2026-02-13 03:29:41.018315 | orchestrator | changed: [testbed-node-4] => { 2026-02-13 03:29:41.018326 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-13 03:29:41.018337 | orchestrator |  "ceph_osd_devices": { 2026-02-13 03:29:41.018347 | orchestrator |  "sdb": { 2026-02-13 03:29:41.018358 | orchestrator |  "osd_lvm_uuid": "43dba57c-3e97-52bb-978e-0b7bf56fe0c6" 2026-02-13 03:29:41.018369 | orchestrator |  }, 2026-02-13 03:29:41.018379 | orchestrator |  "sdc": { 2026-02-13 03:29:41.018390 | orchestrator |  "osd_lvm_uuid": "5ce47f09-4cf3-58ef-8e90-2b997425535f" 2026-02-13 03:29:41.018401 | orchestrator |  } 2026-02-13 03:29:41.018411 | orchestrator |  }, 2026-02-13 03:29:41.018422 | orchestrator |  "lvm_volumes": [ 2026-02-13 03:29:41.018433 | orchestrator |  { 2026-02-13 03:29:41.018444 | orchestrator |  "data": "osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6", 2026-02-13 03:29:41.018454 | orchestrator |  "data_vg": "ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6" 2026-02-13 03:29:41.018465 | orchestrator |  }, 2026-02-13 03:29:41.018476 | orchestrator |  { 2026-02-13 03:29:41.018487 | orchestrator |  "data": "osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f", 2026-02-13 03:29:41.018497 | orchestrator |  "data_vg": "ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f" 2026-02-13 03:29:41.018508 | orchestrator |  } 2026-02-13 03:29:41.018519 | orchestrator |  ] 2026-02-13 03:29:41.018530 | orchestrator |  } 2026-02-13 03:29:41.018541 | orchestrator | } 2026-02-13 03:29:41.018551 | orchestrator | 2026-02-13 03:29:41.018562 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-13 03:29:41.018573 | orchestrator | Friday 13 February 2026 03:29:38 +0000 (0:00:00.410) 0:00:28.515 ******* 2026-02-13 03:29:41.018583 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-13 03:29:41.018594 | orchestrator | 2026-02-13 03:29:41.018605 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-13 03:29:41.018615 | orchestrator | 2026-02-13 03:29:41.018626 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-13 03:29:41.018637 | orchestrator | Friday 13 February 2026 03:29:40 +0000 (0:00:01.164) 0:00:29.680 ******* 2026-02-13 03:29:41.018655 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-13 03:29:41.018666 | orchestrator | 2026-02-13 03:29:41.018677 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-13 03:29:41.018687 | orchestrator | Friday 13 February 2026 03:29:40 +0000 (0:00:00.255) 0:00:29.936 ******* 2026-02-13 03:29:41.018698 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:29:41.018709 | orchestrator | 2026-02-13 03:29:41.018720 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:41.018730 | orchestrator | Friday 13 February 2026 03:29:40 +0000 (0:00:00.232) 0:00:30.168 ******* 2026-02-13 03:29:41.018741 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-13 03:29:41.018751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-13 03:29:41.018762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-13 03:29:41.018772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-13 03:29:41.018783 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-13 03:29:41.018802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-13 03:29:49.484028 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-13 03:29:49.484251 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-13 03:29:49.484286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-13 03:29:49.484307 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-13 03:29:49.484347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-13 03:29:49.484368 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-13 03:29:49.484388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-13 03:29:49.484409 | orchestrator | 2026-02-13 03:29:49.484431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.484452 | orchestrator | Friday 13 February 2026 03:29:41 +0000 (0:00:00.371) 0:00:30.540 ******* 2026-02-13 03:29:49.484473 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.484494 | orchestrator | 2026-02-13 03:29:49.484515 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.484535 | orchestrator | Friday 13 February 2026 03:29:41 +0000 (0:00:00.215) 0:00:30.756 ******* 2026-02-13 03:29:49.484556 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.484577 | orchestrator | 2026-02-13 03:29:49.484596 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.484618 | orchestrator | Friday 13 February 2026 03:29:41 +0000 (0:00:00.193) 0:00:30.949 ******* 2026-02-13 03:29:49.484639 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.484659 | orchestrator | 2026-02-13 03:29:49.484680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.484701 | 
orchestrator | Friday 13 February 2026 03:29:41 +0000 (0:00:00.194) 0:00:31.144 ******* 2026-02-13 03:29:49.484720 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.484738 | orchestrator | 2026-02-13 03:29:49.484757 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.484776 | orchestrator | Friday 13 February 2026 03:29:42 +0000 (0:00:00.572) 0:00:31.717 ******* 2026-02-13 03:29:49.484795 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.484813 | orchestrator | 2026-02-13 03:29:49.484832 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.484850 | orchestrator | Friday 13 February 2026 03:29:42 +0000 (0:00:00.214) 0:00:31.931 ******* 2026-02-13 03:29:49.484899 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.484919 | orchestrator | 2026-02-13 03:29:49.484938 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.484955 | orchestrator | Friday 13 February 2026 03:29:42 +0000 (0:00:00.207) 0:00:32.138 ******* 2026-02-13 03:29:49.484973 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.484991 | orchestrator | 2026-02-13 03:29:49.485009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.485027 | orchestrator | Friday 13 February 2026 03:29:42 +0000 (0:00:00.212) 0:00:32.351 ******* 2026-02-13 03:29:49.485045 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.485063 | orchestrator | 2026-02-13 03:29:49.485080 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.485099 | orchestrator | Friday 13 February 2026 03:29:43 +0000 (0:00:00.210) 0:00:32.561 ******* 2026-02-13 03:29:49.485141 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d) 2026-02-13 03:29:49.485160 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d) 2026-02-13 03:29:49.485176 | orchestrator | 2026-02-13 03:29:49.485195 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.485213 | orchestrator | Friday 13 February 2026 03:29:43 +0000 (0:00:00.446) 0:00:33.008 ******* 2026-02-13 03:29:49.485231 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e) 2026-02-13 03:29:49.485249 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e) 2026-02-13 03:29:49.485267 | orchestrator | 2026-02-13 03:29:49.485285 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.485302 | orchestrator | Friday 13 February 2026 03:29:43 +0000 (0:00:00.427) 0:00:33.435 ******* 2026-02-13 03:29:49.485321 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3) 2026-02-13 03:29:49.485339 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3) 2026-02-13 03:29:49.485357 | orchestrator | 2026-02-13 03:29:49.485375 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:29:49.485393 | orchestrator | Friday 13 February 2026 03:29:44 +0000 (0:00:00.413) 0:00:33.849 ******* 2026-02-13 03:29:49.485412 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d) 2026-02-13 03:29:49.485430 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d) 2026-02-13 03:29:49.485448 | orchestrator | 2026-02-13 03:29:49.485467 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-13 03:29:49.485485 | orchestrator | Friday 13 February 2026 03:29:44 +0000 (0:00:00.443) 0:00:34.292 ******* 2026-02-13 03:29:49.485502 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-13 03:29:49.485520 | orchestrator | 2026-02-13 03:29:49.485538 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.485581 | orchestrator | Friday 13 February 2026 03:29:45 +0000 (0:00:00.351) 0:00:34.643 ******* 2026-02-13 03:29:49.485599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-13 03:29:49.485618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-13 03:29:49.485636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-13 03:29:49.485661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-13 03:29:49.485680 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-13 03:29:49.485698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-13 03:29:49.485728 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-13 03:29:49.485746 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-13 03:29:49.485763 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-13 03:29:49.485781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-13 03:29:49.485799 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-13 03:29:49.485817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-13 03:29:49.485835 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-13 03:29:49.485853 | orchestrator | 2026-02-13 03:29:49.485871 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.485889 | orchestrator | Friday 13 February 2026 03:29:45 +0000 (0:00:00.575) 0:00:35.219 ******* 2026-02-13 03:29:49.485907 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.485926 | orchestrator | 2026-02-13 03:29:49.485943 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.485961 | orchestrator | Friday 13 February 2026 03:29:45 +0000 (0:00:00.196) 0:00:35.416 ******* 2026-02-13 03:29:49.485979 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.485997 | orchestrator | 2026-02-13 03:29:49.486091 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486137 | orchestrator | Friday 13 February 2026 03:29:46 +0000 (0:00:00.207) 0:00:35.624 ******* 2026-02-13 03:29:49.486154 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.486171 | orchestrator | 2026-02-13 03:29:49.486188 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486206 | orchestrator | Friday 13 February 2026 03:29:46 +0000 (0:00:00.208) 0:00:35.832 ******* 2026-02-13 03:29:49.486224 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.486242 | orchestrator | 2026-02-13 03:29:49.486260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486278 | orchestrator | Friday 13 February 2026 03:29:46 +0000 (0:00:00.205) 0:00:36.038 ******* 2026-02-13 03:29:49.486296 
| orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.486314 | orchestrator | 2026-02-13 03:29:49.486331 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486349 | orchestrator | Friday 13 February 2026 03:29:46 +0000 (0:00:00.205) 0:00:36.244 ******* 2026-02-13 03:29:49.486367 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.486385 | orchestrator | 2026-02-13 03:29:49.486403 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486421 | orchestrator | Friday 13 February 2026 03:29:46 +0000 (0:00:00.221) 0:00:36.466 ******* 2026-02-13 03:29:49.486439 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.486457 | orchestrator | 2026-02-13 03:29:49.486474 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486493 | orchestrator | Friday 13 February 2026 03:29:47 +0000 (0:00:00.211) 0:00:36.677 ******* 2026-02-13 03:29:49.486510 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.486528 | orchestrator | 2026-02-13 03:29:49.486546 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486564 | orchestrator | Friday 13 February 2026 03:29:47 +0000 (0:00:00.197) 0:00:36.875 ******* 2026-02-13 03:29:49.486582 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-13 03:29:49.486600 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-13 03:29:49.486619 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-13 03:29:49.486637 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-13 03:29:49.486655 | orchestrator | 2026-02-13 03:29:49.486685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486703 | orchestrator | Friday 13 February 2026 03:29:48 +0000 (0:00:00.859) 
0:00:37.734 ******* 2026-02-13 03:29:49.486721 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.486738 | orchestrator | 2026-02-13 03:29:49.486756 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486774 | orchestrator | Friday 13 February 2026 03:29:48 +0000 (0:00:00.213) 0:00:37.948 ******* 2026-02-13 03:29:49.486792 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.486810 | orchestrator | 2026-02-13 03:29:49.486828 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486846 | orchestrator | Friday 13 February 2026 03:29:48 +0000 (0:00:00.196) 0:00:38.145 ******* 2026-02-13 03:29:49.486864 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.486882 | orchestrator | 2026-02-13 03:29:49.486900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:29:49.486918 | orchestrator | Friday 13 February 2026 03:29:49 +0000 (0:00:00.661) 0:00:38.806 ******* 2026-02-13 03:29:49.486936 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:49.486954 | orchestrator | 2026-02-13 03:29:49.486985 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-13 03:29:53.589497 | orchestrator | Friday 13 February 2026 03:29:49 +0000 (0:00:00.207) 0:00:39.013 ******* 2026-02-13 03:29:53.589602 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-13 03:29:53.589618 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-13 03:29:53.589630 | orchestrator | 2026-02-13 03:29:53.589642 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-13 03:29:53.589672 | orchestrator | Friday 13 February 2026 03:29:49 +0000 (0:00:00.175) 0:00:39.189 ******* 2026-02-13 03:29:53.589684 | orchestrator | skipping: 
[testbed-node-5] 2026-02-13 03:29:53.589696 | orchestrator | 2026-02-13 03:29:53.589707 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-13 03:29:53.589718 | orchestrator | Friday 13 February 2026 03:29:49 +0000 (0:00:00.160) 0:00:39.349 ******* 2026-02-13 03:29:53.589729 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.589740 | orchestrator | 2026-02-13 03:29:53.589751 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-13 03:29:53.589762 | orchestrator | Friday 13 February 2026 03:29:49 +0000 (0:00:00.135) 0:00:39.485 ******* 2026-02-13 03:29:53.589773 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.589784 | orchestrator | 2026-02-13 03:29:53.589795 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-13 03:29:53.589806 | orchestrator | Friday 13 February 2026 03:29:50 +0000 (0:00:00.161) 0:00:39.646 ******* 2026-02-13 03:29:53.589816 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:29:53.589828 | orchestrator | 2026-02-13 03:29:53.589838 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-13 03:29:53.589849 | orchestrator | Friday 13 February 2026 03:29:50 +0000 (0:00:00.146) 0:00:39.793 ******* 2026-02-13 03:29:53.589861 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8151fb69-3858-5887-af01-e0d44d84b3e6'}}) 2026-02-13 03:29:53.589872 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5f44536a-6e14-5adc-b1bb-0c010a1280f1'}}) 2026-02-13 03:29:53.589883 | orchestrator | 2026-02-13 03:29:53.589894 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-02-13 03:29:53.589904 | orchestrator | Friday 13 February 2026 03:29:50 +0000 (0:00:00.175) 0:00:39.969 ******* 2026-02-13 03:29:53.589916 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8151fb69-3858-5887-af01-e0d44d84b3e6'}})  2026-02-13 03:29:53.589928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5f44536a-6e14-5adc-b1bb-0c010a1280f1'}})  2026-02-13 03:29:53.589939 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.589973 | orchestrator | 2026-02-13 03:29:53.589986 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-13 03:29:53.589996 | orchestrator | Friday 13 February 2026 03:29:50 +0000 (0:00:00.155) 0:00:40.124 ******* 2026-02-13 03:29:53.590007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8151fb69-3858-5887-af01-e0d44d84b3e6'}})  2026-02-13 03:29:53.590084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5f44536a-6e14-5adc-b1bb-0c010a1280f1'}})  2026-02-13 03:29:53.590099 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.590111 | orchestrator | 2026-02-13 03:29:53.590214 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-13 03:29:53.590228 | orchestrator | Friday 13 February 2026 03:29:50 +0000 (0:00:00.156) 0:00:40.280 ******* 2026-02-13 03:29:53.590240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8151fb69-3858-5887-af01-e0d44d84b3e6'}})  2026-02-13 03:29:53.590253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5f44536a-6e14-5adc-b1bb-0c010a1280f1'}})  2026-02-13 03:29:53.590266 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.590279 | orchestrator | 2026-02-13 03:29:53.590292 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-13 03:29:53.590304 | orchestrator | Friday 13 February 2026 03:29:50 +0000 
(0:00:00.160) 0:00:40.440 ******* 2026-02-13 03:29:53.590315 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:29:53.590325 | orchestrator | 2026-02-13 03:29:53.590337 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-13 03:29:53.590349 | orchestrator | Friday 13 February 2026 03:29:51 +0000 (0:00:00.158) 0:00:40.599 ******* 2026-02-13 03:29:53.590360 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:29:53.590370 | orchestrator | 2026-02-13 03:29:53.590381 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-13 03:29:53.590393 | orchestrator | Friday 13 February 2026 03:29:51 +0000 (0:00:00.345) 0:00:40.944 ******* 2026-02-13 03:29:53.590404 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.590413 | orchestrator | 2026-02-13 03:29:53.590423 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-13 03:29:53.590432 | orchestrator | Friday 13 February 2026 03:29:51 +0000 (0:00:00.141) 0:00:41.086 ******* 2026-02-13 03:29:53.590442 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.590452 | orchestrator | 2026-02-13 03:29:53.590461 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-13 03:29:53.590471 | orchestrator | Friday 13 February 2026 03:29:51 +0000 (0:00:00.134) 0:00:41.221 ******* 2026-02-13 03:29:53.590480 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.590490 | orchestrator | 2026-02-13 03:29:53.590500 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-13 03:29:53.590509 | orchestrator | Friday 13 February 2026 03:29:51 +0000 (0:00:00.131) 0:00:41.352 ******* 2026-02-13 03:29:53.590518 | orchestrator | ok: [testbed-node-5] => { 2026-02-13 03:29:53.590528 | orchestrator |  "ceph_osd_devices": { 2026-02-13 03:29:53.590538 | orchestrator |  "sdb": { 
2026-02-13 03:29:53.590567 | orchestrator |  "osd_lvm_uuid": "8151fb69-3858-5887-af01-e0d44d84b3e6" 2026-02-13 03:29:53.590577 | orchestrator |  }, 2026-02-13 03:29:53.590587 | orchestrator |  "sdc": { 2026-02-13 03:29:53.590597 | orchestrator |  "osd_lvm_uuid": "5f44536a-6e14-5adc-b1bb-0c010a1280f1" 2026-02-13 03:29:53.590607 | orchestrator |  } 2026-02-13 03:29:53.590616 | orchestrator |  } 2026-02-13 03:29:53.590626 | orchestrator | } 2026-02-13 03:29:53.590636 | orchestrator | 2026-02-13 03:29:53.590652 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-13 03:29:53.590662 | orchestrator | Friday 13 February 2026 03:29:51 +0000 (0:00:00.146) 0:00:41.499 ******* 2026-02-13 03:29:53.590672 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.590692 | orchestrator | 2026-02-13 03:29:53.590702 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-13 03:29:53.590711 | orchestrator | Friday 13 February 2026 03:29:52 +0000 (0:00:00.141) 0:00:41.640 ******* 2026-02-13 03:29:53.590721 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.590730 | orchestrator | 2026-02-13 03:29:53.590740 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-13 03:29:53.590749 | orchestrator | Friday 13 February 2026 03:29:52 +0000 (0:00:00.130) 0:00:41.770 ******* 2026-02-13 03:29:53.590759 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:29:53.590768 | orchestrator | 2026-02-13 03:29:53.590778 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-13 03:29:53.590788 | orchestrator | Friday 13 February 2026 03:29:52 +0000 (0:00:00.112) 0:00:41.883 ******* 2026-02-13 03:29:53.590797 | orchestrator | changed: [testbed-node-5] => { 2026-02-13 03:29:53.590807 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-13 03:29:53.590817 | orchestrator | 
 "ceph_osd_devices": { 2026-02-13 03:29:53.590826 | orchestrator |  "sdb": { 2026-02-13 03:29:53.590836 | orchestrator |  "osd_lvm_uuid": "8151fb69-3858-5887-af01-e0d44d84b3e6" 2026-02-13 03:29:53.590846 | orchestrator |  }, 2026-02-13 03:29:53.590856 | orchestrator |  "sdc": { 2026-02-13 03:29:53.590865 | orchestrator |  "osd_lvm_uuid": "5f44536a-6e14-5adc-b1bb-0c010a1280f1" 2026-02-13 03:29:53.590875 | orchestrator |  } 2026-02-13 03:29:53.590885 | orchestrator |  }, 2026-02-13 03:29:53.590894 | orchestrator |  "lvm_volumes": [ 2026-02-13 03:29:53.590904 | orchestrator |  { 2026-02-13 03:29:53.590913 | orchestrator |  "data": "osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6", 2026-02-13 03:29:53.590923 | orchestrator |  "data_vg": "ceph-8151fb69-3858-5887-af01-e0d44d84b3e6" 2026-02-13 03:29:53.590932 | orchestrator |  }, 2026-02-13 03:29:53.590942 | orchestrator |  { 2026-02-13 03:29:53.590952 | orchestrator |  "data": "osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1", 2026-02-13 03:29:53.590961 | orchestrator |  "data_vg": "ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1" 2026-02-13 03:29:53.590971 | orchestrator |  } 2026-02-13 03:29:53.590981 | orchestrator |  ] 2026-02-13 03:29:53.590990 | orchestrator |  } 2026-02-13 03:29:53.591000 | orchestrator | } 2026-02-13 03:29:53.591010 | orchestrator | 2026-02-13 03:29:53.591019 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-13 03:29:53.591029 | orchestrator | Friday 13 February 2026 03:29:52 +0000 (0:00:00.209) 0:00:42.093 ******* 2026-02-13 03:29:53.591038 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-13 03:29:53.591049 | orchestrator | 2026-02-13 03:29:53.591065 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 03:29:53.591083 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-13 03:29:53.591102 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-13 03:29:53.591138 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-13 03:29:53.591154 | orchestrator | 2026-02-13 03:29:53.591169 | orchestrator | 2026-02-13 03:29:53.591184 | orchestrator | 2026-02-13 03:29:53.591200 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:29:53.591215 | orchestrator | Friday 13 February 2026 03:29:53 +0000 (0:00:01.012) 0:00:43.105 ******* 2026-02-13 03:29:53.591228 | orchestrator | =============================================================================== 2026-02-13 03:29:53.591242 | orchestrator | Write configuration file ------------------------------------------------ 3.96s 2026-02-13 03:29:53.591272 | orchestrator | Add known partitions to the list of available block devices ------------- 1.81s 2026-02-13 03:29:53.591286 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s 2026-02-13 03:29:53.591301 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s 2026-02-13 03:29:53.591316 | orchestrator | Print configuration data ------------------------------------------------ 1.04s 2026-02-13 03:29:53.591331 | orchestrator | Add known links to the list of available block devices ------------------ 0.97s 2026-02-13 03:29:53.591347 | orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2026-02-13 03:29:53.591363 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2026-02-13 03:29:53.591379 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.78s 2026-02-13 03:29:53.591395 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s 2026-02-13 
03:29:53.591409 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-02-13 03:29:53.591425 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.68s 2026-02-13 03:29:53.591441 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-02-13 03:29:53.591470 | orchestrator | Add known partitions to the list of available block devices ------------- 0.66s 2026-02-13 03:29:53.997530 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.66s 2026-02-13 03:29:53.997599 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2026-02-13 03:29:53.997606 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-02-13 03:29:53.997634 | orchestrator | Set OSD devices config data --------------------------------------------- 0.64s 2026-02-13 03:29:53.997638 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s 2026-02-13 03:29:53.997642 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s 2026-02-13 03:30:16.623929 | orchestrator | 2026-02-13 03:30:16 | INFO  | Task 37f95347-3bd0-4556-91f0-2401b5e847b3 (sync inventory) is running in background. Output coming soon. 
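The "Compile lvm_volumes" and "Print configuration data" tasks above show each entry in `ceph_osd_devices` being turned into an `lvm_volumes` item whose names embed the device's `osd_lvm_uuid`. A minimal sketch of that mapping, assuming the block-only case shown in the log (no separate DB/WAL devices) — the function name and shape are illustrative, not the playbook's actual code:

```python
# Sample data copied from the testbed-node-5 output above.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "8151fb69-3858-5887-af01-e0d44d84b3e6"},
    "sdc": {"osd_lvm_uuid": "5f44536a-6e14-5adc-b1bb-0c010a1280f1"},
}

def compile_lvm_volumes(devices: dict) -> list:
    """Build a block-only lvm_volumes list: one LV/VG pair per OSD device,
    named osd-block-<uuid> and ceph-<uuid> as in the printed config data."""
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for spec in devices.values()
    ]

print(compile_lvm_volumes(ceph_osd_devices))
```

The skipped "block + db", "block + wal", and "block + db + wal" branches in the log would extend each entry with `db`/`db_vg` and `wal`/`wal_vg` keys; they do not fire here because no dedicated DB or WAL devices are configured.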
2026-02-13 03:30:45.099686 | orchestrator | 2026-02-13 03:30:18 | INFO  | Starting group_vars file reorganization 2026-02-13 03:30:45.099799 | orchestrator | 2026-02-13 03:30:18 | INFO  | Moved 0 file(s) to their respective directories 2026-02-13 03:30:45.099836 | orchestrator | 2026-02-13 03:30:18 | INFO  | Group_vars file reorganization completed 2026-02-13 03:30:45.099849 | orchestrator | 2026-02-13 03:30:21 | INFO  | Starting variable preparation from inventory 2026-02-13 03:30:45.099860 | orchestrator | 2026-02-13 03:30:24 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-13 03:30:45.099871 | orchestrator | 2026-02-13 03:30:24 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-13 03:30:45.099882 | orchestrator | 2026-02-13 03:30:24 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-13 03:30:45.099893 | orchestrator | 2026-02-13 03:30:24 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-13 03:30:45.099908 | orchestrator | 2026-02-13 03:30:24 | INFO  | Variable preparation completed 2026-02-13 03:30:45.099927 | orchestrator | 2026-02-13 03:30:25 | INFO  | Starting inventory overwrite handling 2026-02-13 03:30:45.099945 | orchestrator | 2026-02-13 03:30:25 | INFO  | Handling group overwrites in 99-overwrite 2026-02-13 03:30:45.099970 | orchestrator | 2026-02-13 03:30:25 | INFO  | Removing group frr:children from 60-generic 2026-02-13 03:30:45.099993 | orchestrator | 2026-02-13 03:30:25 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-13 03:30:45.100010 | orchestrator | 2026-02-13 03:30:25 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-13 03:30:45.100069 | orchestrator | 2026-02-13 03:30:25 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-13 03:30:45.100089 | orchestrator | 2026-02-13 03:30:25 | INFO  | Handling group overwrites in 20-roles 2026-02-13 03:30:45.100109 | orchestrator | 2026-02-13 03:30:25 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-13 03:30:45.100121 | orchestrator | 2026-02-13 03:30:25 | INFO  | Removed 5 group(s) in total 2026-02-13 03:30:45.100136 | orchestrator | 2026-02-13 03:30:25 | INFO  | Inventory overwrite handling completed 2026-02-13 03:30:45.100155 | orchestrator | 2026-02-13 03:30:26 | INFO  | Starting merge of inventory files 2026-02-13 03:30:45.100245 | orchestrator | 2026-02-13 03:30:26 | INFO  | Inventory files merged successfully 2026-02-13 03:30:45.100266 | orchestrator | 2026-02-13 03:30:32 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-13 03:30:45.100286 | orchestrator | 2026-02-13 03:30:43 | INFO  | Successfully wrote ClusterShell configuration 2026-02-13 03:30:45.100306 | orchestrator | [master 35c63e2] 2026-02-13-03-30 2026-02-13 03:30:45.100327 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2026-02-13 03:30:47.442070 | orchestrator | 2026-02-13 03:30:47 | INFO  | Task da60c44b-fc39-43d6-8993-21c8a78dc1ef (ceph-create-lvm-devices) was prepared for execution. 2026-02-13 03:30:47.442137 | orchestrator | 2026-02-13 03:30:47 | INFO  | It takes a moment until task da60c44b-fc39-43d6-8993-21c8a78dc1ef (ceph-create-lvm-devices) has been started and output is visible here. 
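The ceph-create-lvm-devices play that follows creates one volume group and one logical volume per OSD device, using the `ceph-<uuid>` / `osd-block-<uuid>` naming visible in its task output. As a rough illustration, the equivalent manual LVM commands can be sketched like this — the device path and `lvcreate` flags are assumptions for illustration, not taken from the playbook:

```python
def lvm_commands(device: str, osd_lvm_uuid: str) -> list:
    """Approximate shell commands behind the 'Create block VGs' and
    'Create block LVs' tasks for a single OSD device (illustrative only)."""
    vg = f"ceph-{osd_lvm_uuid}"
    lv = f"osd-block-{osd_lvm_uuid}"
    return [
        f"pvcreate /dev/{device}",             # prepare the physical volume
        f"vgcreate {vg} /dev/{device}",        # one VG per OSD device
        f"lvcreate -l 100%FREE -n {lv} {vg}",  # one LV spanning the whole VG
    ]

# UUID taken from the testbed-node-3 output below.
for cmd in lvm_commands("sdb", "90d7f9ba-9289-5e80-9038-1ad4979f4e3f"):
    print(cmd)
```

In the actual run these operations are performed by Ansible LVM tasks rather than raw shell commands, which is why the log reports them as `changed` items per `data`/`data_vg` pair.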
2026-02-13 03:30:59.236309 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-13 03:30:59.236455 | orchestrator | 2.16.14 2026-02-13 03:30:59.236487 | orchestrator | 2026-02-13 03:30:59.236508 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-13 03:30:59.236527 | orchestrator | 2026-02-13 03:30:59.236546 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-13 03:30:59.236564 | orchestrator | Friday 13 February 2026 03:30:51 +0000 (0:00:00.300) 0:00:00.300 ******* 2026-02-13 03:30:59.236584 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-13 03:30:59.236603 | orchestrator | 2026-02-13 03:30:59.236621 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-13 03:30:59.236641 | orchestrator | Friday 13 February 2026 03:30:51 +0000 (0:00:00.250) 0:00:00.550 ******* 2026-02-13 03:30:59.236654 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:30:59.236665 | orchestrator | 2026-02-13 03:30:59.236676 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.236687 | orchestrator | Friday 13 February 2026 03:30:52 +0000 (0:00:00.250) 0:00:00.801 ******* 2026-02-13 03:30:59.236698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-13 03:30:59.236709 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-13 03:30:59.236737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-13 03:30:59.236748 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-13 03:30:59.236759 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-13 
03:30:59.236769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-13 03:30:59.236780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-13 03:30:59.236791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-13 03:30:59.236802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-13 03:30:59.236812 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-13 03:30:59.236847 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-13 03:30:59.236858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-13 03:30:59.236869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-13 03:30:59.236879 | orchestrator | 2026-02-13 03:30:59.236890 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.236901 | orchestrator | Friday 13 February 2026 03:30:52 +0000 (0:00:00.511) 0:00:01.313 ******* 2026-02-13 03:30:59.236912 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.236922 | orchestrator | 2026-02-13 03:30:59.236933 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.236944 | orchestrator | Friday 13 February 2026 03:30:52 +0000 (0:00:00.201) 0:00:01.514 ******* 2026-02-13 03:30:59.236954 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.236965 | orchestrator | 2026-02-13 03:30:59.236976 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.236987 | orchestrator | Friday 13 February 2026 03:30:53 +0000 (0:00:00.207) 0:00:01.721 ******* 2026-02-13 
03:30:59.237005 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237022 | orchestrator | 2026-02-13 03:30:59.237034 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.237045 | orchestrator | Friday 13 February 2026 03:30:53 +0000 (0:00:00.199) 0:00:01.921 ******* 2026-02-13 03:30:59.237055 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237066 | orchestrator | 2026-02-13 03:30:59.237077 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.237088 | orchestrator | Friday 13 February 2026 03:30:53 +0000 (0:00:00.208) 0:00:02.129 ******* 2026-02-13 03:30:59.237099 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237110 | orchestrator | 2026-02-13 03:30:59.237120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.237131 | orchestrator | Friday 13 February 2026 03:30:53 +0000 (0:00:00.214) 0:00:02.344 ******* 2026-02-13 03:30:59.237142 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237153 | orchestrator | 2026-02-13 03:30:59.237164 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.237203 | orchestrator | Friday 13 February 2026 03:30:53 +0000 (0:00:00.186) 0:00:02.530 ******* 2026-02-13 03:30:59.237214 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237225 | orchestrator | 2026-02-13 03:30:59.237236 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.237247 | orchestrator | Friday 13 February 2026 03:30:54 +0000 (0:00:00.202) 0:00:02.732 ******* 2026-02-13 03:30:59.237258 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237269 | orchestrator | 2026-02-13 03:30:59.237280 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-13 03:30:59.237291 | orchestrator | Friday 13 February 2026 03:30:54 +0000 (0:00:00.197) 0:00:02.930 ******* 2026-02-13 03:30:59.237302 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d) 2026-02-13 03:30:59.237314 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d) 2026-02-13 03:30:59.237325 | orchestrator | 2026-02-13 03:30:59.237336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.237367 | orchestrator | Friday 13 February 2026 03:30:54 +0000 (0:00:00.420) 0:00:03.350 ******* 2026-02-13 03:30:59.237379 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165) 2026-02-13 03:30:59.237390 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165) 2026-02-13 03:30:59.237401 | orchestrator | 2026-02-13 03:30:59.237412 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.237432 | orchestrator | Friday 13 February 2026 03:30:55 +0000 (0:00:00.635) 0:00:03.986 ******* 2026-02-13 03:30:59.237443 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322) 2026-02-13 03:30:59.237454 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322) 2026-02-13 03:30:59.237465 | orchestrator | 2026-02-13 03:30:59.237476 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.237487 | orchestrator | Friday 13 February 2026 03:30:56 +0000 (0:00:00.662) 0:00:04.648 ******* 2026-02-13 03:30:59.237497 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226) 2026-02-13 03:30:59.237515 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226) 2026-02-13 03:30:59.237527 | orchestrator | 2026-02-13 03:30:59.237538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-13 03:30:59.237549 | orchestrator | Friday 13 February 2026 03:30:56 +0000 (0:00:00.835) 0:00:05.484 ******* 2026-02-13 03:30:59.237560 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-13 03:30:59.237571 | orchestrator | 2026-02-13 03:30:59.237582 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:30:59.237593 | orchestrator | Friday 13 February 2026 03:30:57 +0000 (0:00:00.398) 0:00:05.882 ******* 2026-02-13 03:30:59.237603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-13 03:30:59.237614 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-13 03:30:59.237625 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-13 03:30:59.237636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-13 03:30:59.237646 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-13 03:30:59.237657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-13 03:30:59.237667 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-13 03:30:59.237678 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-13 03:30:59.237689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-13 03:30:59.237699 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-13 03:30:59.237710 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-13 03:30:59.237721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-13 03:30:59.237731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-13 03:30:59.237742 | orchestrator | 2026-02-13 03:30:59.237753 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:30:59.237764 | orchestrator | Friday 13 February 2026 03:30:57 +0000 (0:00:00.420) 0:00:06.302 ******* 2026-02-13 03:30:59.237775 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237785 | orchestrator | 2026-02-13 03:30:59.237796 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:30:59.237807 | orchestrator | Friday 13 February 2026 03:30:57 +0000 (0:00:00.211) 0:00:06.514 ******* 2026-02-13 03:30:59.237818 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237829 | orchestrator | 2026-02-13 03:30:59.237840 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:30:59.237850 | orchestrator | Friday 13 February 2026 03:30:58 +0000 (0:00:00.225) 0:00:06.739 ******* 2026-02-13 03:30:59.237861 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237880 | orchestrator | 2026-02-13 03:30:59.237891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:30:59.237902 | orchestrator | Friday 13 February 2026 03:30:58 +0000 (0:00:00.242) 0:00:06.982 ******* 2026-02-13 03:30:59.237912 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237923 | orchestrator | 2026-02-13 03:30:59.237934 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-13 03:30:59.237945 | orchestrator | Friday 13 February 2026 03:30:58 +0000 (0:00:00.232) 0:00:07.214 ******* 2026-02-13 03:30:59.237955 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.237966 | orchestrator | 2026-02-13 03:30:59.237977 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:30:59.237988 | orchestrator | Friday 13 February 2026 03:30:58 +0000 (0:00:00.202) 0:00:07.417 ******* 2026-02-13 03:30:59.237998 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.238009 | orchestrator | 2026-02-13 03:30:59.238079 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:30:59.238092 | orchestrator | Friday 13 February 2026 03:30:59 +0000 (0:00:00.222) 0:00:07.639 ******* 2026-02-13 03:30:59.238103 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:30:59.238114 | orchestrator | 2026-02-13 03:30:59.238133 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:07.384795 | orchestrator | Friday 13 February 2026 03:30:59 +0000 (0:00:00.196) 0:00:07.836 ******* 2026-02-13 03:31:07.384889 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:31:07.384900 | orchestrator | 2026-02-13 03:31:07.384909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:07.384917 | orchestrator | Friday 13 February 2026 03:30:59 +0000 (0:00:00.605) 0:00:08.441 ******* 2026-02-13 03:31:07.384925 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-13 03:31:07.384932 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-13 03:31:07.384940 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-13 03:31:07.384947 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-13 03:31:07.384954 | orchestrator | 2026-02-13 
03:31:07.384961 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:07.384968 | orchestrator | Friday 13 February 2026 03:31:00 +0000 (0:00:00.654) 0:00:09.096 ******* 2026-02-13 03:31:07.384975 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:31:07.384981 | orchestrator | 2026-02-13 03:31:07.384988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:07.384995 | orchestrator | Friday 13 February 2026 03:31:00 +0000 (0:00:00.226) 0:00:09.322 ******* 2026-02-13 03:31:07.385002 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:31:07.385009 | orchestrator | 2026-02-13 03:31:07.385031 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:07.385039 | orchestrator | Friday 13 February 2026 03:31:00 +0000 (0:00:00.220) 0:00:09.543 ******* 2026-02-13 03:31:07.385047 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:31:07.385054 | orchestrator | 2026-02-13 03:31:07.385061 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:07.385068 | orchestrator | Friday 13 February 2026 03:31:01 +0000 (0:00:00.208) 0:00:09.752 ******* 2026-02-13 03:31:07.385075 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:31:07.385082 | orchestrator | 2026-02-13 03:31:07.385089 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-13 03:31:07.385095 | orchestrator | Friday 13 February 2026 03:31:01 +0000 (0:00:00.218) 0:00:09.970 ******* 2026-02-13 03:31:07.385102 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:31:07.385109 | orchestrator | 2026-02-13 03:31:07.385115 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-13 03:31:07.385122 | orchestrator | Friday 13 February 2026 03:31:01 +0000 (0:00:00.141) 
0:00:10.111 *******
2026-02-13 03:31:07.385130 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}})
2026-02-13 03:31:07.385157 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7c5ad083-16ef-5861-9238-a28b124c66ab'}})
2026-02-13 03:31:07.385164 | orchestrator |
2026-02-13 03:31:07.385171 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-13 03:31:07.385178 | orchestrator | Friday 13 February 2026  03:31:01 +0000 (0:00:00.201) 0:00:10.313 *******
2026-02-13 03:31:07.385227 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:07.385237 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:07.385243 | orchestrator |
2026-02-13 03:31:07.385250 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-13 03:31:07.385256 | orchestrator | Friday 13 February 2026  03:31:03 +0000 (0:00:01.964) 0:00:12.277 *******
2026-02-13 03:31:07.385262 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:07.385269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:07.385275 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385280 | orchestrator |
2026-02-13 03:31:07.385286 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-13 03:31:07.385293 | orchestrator | Friday 13 February 2026  03:31:03 +0000 (0:00:00.164) 0:00:12.442 *******
2026-02-13 03:31:07.385298 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:07.385305 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:07.385311 | orchestrator |
2026-02-13 03:31:07.385317 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-13 03:31:07.385323 | orchestrator | Friday 13 February 2026  03:31:05 +0000 (0:00:01.489) 0:00:13.932 *******
2026-02-13 03:31:07.385329 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:07.385336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:07.385342 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385348 | orchestrator |
2026-02-13 03:31:07.385355 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-13 03:31:07.385361 | orchestrator | Friday 13 February 2026  03:31:05 +0000 (0:00:00.167) 0:00:14.100 *******
2026-02-13 03:31:07.385383 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385390 | orchestrator |
2026-02-13 03:31:07.385397 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-13 03:31:07.385403 | orchestrator | Friday 13 February 2026  03:31:05 +0000 (0:00:00.342) 0:00:14.443 *******
2026-02-13 03:31:07.385409 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:07.385416 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:07.385422 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385428 | orchestrator |
2026-02-13 03:31:07.385435 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-13 03:31:07.385442 | orchestrator | Friday 13 February 2026  03:31:06 +0000 (0:00:00.202) 0:00:14.645 *******
2026-02-13 03:31:07.385455 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385462 | orchestrator |
2026-02-13 03:31:07.385468 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-13 03:31:07.385474 | orchestrator | Friday 13 February 2026  03:31:06 +0000 (0:00:00.140) 0:00:14.786 *******
2026-02-13 03:31:07.385486 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:07.385493 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:07.385499 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385505 | orchestrator |
2026-02-13 03:31:07.385511 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-13 03:31:07.385518 | orchestrator | Friday 13 February 2026  03:31:06 +0000 (0:00:00.153) 0:00:14.939 *******
2026-02-13 03:31:07.385524 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385531 | orchestrator |
2026-02-13 03:31:07.385537 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-13 03:31:07.385543 | orchestrator | Friday 13 February 2026  03:31:06 +0000 (0:00:00.138) 0:00:15.077 *******
2026-02-13 03:31:07.385550 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:07.385556 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:07.385562 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385569 | orchestrator |
2026-02-13 03:31:07.385574 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-13 03:31:07.385581 | orchestrator | Friday 13 February 2026  03:31:06 +0000 (0:00:00.161) 0:00:15.239 *******
2026-02-13 03:31:07.385588 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:31:07.385594 | orchestrator |
2026-02-13 03:31:07.385600 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-13 03:31:07.385606 | orchestrator | Friday 13 February 2026  03:31:06 +0000 (0:00:00.149) 0:00:15.388 *******
2026-02-13 03:31:07.385612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:07.385618 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:07.385625 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385631 | orchestrator |
2026-02-13 03:31:07.385637 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-13 03:31:07.385643 | orchestrator | Friday 13 February 2026  03:31:06 +0000 (0:00:00.157) 0:00:15.546 *******
2026-02-13 03:31:07.385650 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:07.385656 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:07.385662 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385668 | orchestrator |
2026-02-13 03:31:07.385674 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-13 03:31:07.385680 | orchestrator | Friday 13 February 2026  03:31:07 +0000 (0:00:00.158) 0:00:15.705 *******
2026-02-13 03:31:07.385687 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:07.385693 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:07.385704 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385710 | orchestrator |
2026-02-13 03:31:07.385716 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-13 03:31:07.385721 | orchestrator | Friday 13 February 2026  03:31:07 +0000 (0:00:00.140) 0:00:15.845 *******
2026-02-13 03:31:07.385727 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:07.385733 | orchestrator |
2026-02-13 03:31:07.385739 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-13 03:31:07.385750 | orchestrator | Friday 13 February 2026  03:31:07 +0000 (0:00:00.140) 0:00:15.986 *******
2026-02-13 03:31:13.801860 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.802001 | orchestrator |
2026-02-13 03:31:13.802102 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-13 03:31:13.802124 | orchestrator | Friday 13 February 2026  03:31:07 +0000 (0:00:00.138) 0:00:16.124 *******
2026-02-13 03:31:13.802144 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.802165 | orchestrator |
2026-02-13 03:31:13.802184 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-13 03:31:13.802259 | orchestrator | Friday 13 February 2026  03:31:07 +0000 (0:00:00.329) 0:00:16.453 *******
2026-02-13 03:31:13.802277 | orchestrator | ok: [testbed-node-3] => {
2026-02-13 03:31:13.802290 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-02-13 03:31:13.802301 | orchestrator | }
2026-02-13 03:31:13.802313 | orchestrator |
2026-02-13 03:31:13.802324 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-13 03:31:13.802335 | orchestrator | Friday 13 February 2026  03:31:07 +0000 (0:00:00.150) 0:00:16.604 *******
2026-02-13 03:31:13.802346 | orchestrator | ok: [testbed-node-3] => {
2026-02-13 03:31:13.802357 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-02-13 03:31:13.802368 | orchestrator | }
2026-02-13 03:31:13.802379 | orchestrator |
2026-02-13 03:31:13.802390 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-13 03:31:13.802419 | orchestrator | Friday 13 February 2026  03:31:08 +0000 (0:00:00.159) 0:00:16.763 *******
2026-02-13 03:31:13.802431 | orchestrator | ok: [testbed-node-3] => {
2026-02-13 03:31:13.802442 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-02-13 03:31:13.802453 | orchestrator | }
2026-02-13 03:31:13.802464 | orchestrator |
2026-02-13 03:31:13.802475 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-13 03:31:13.802491 | orchestrator | Friday 13 February 2026  03:31:08 +0000 (0:00:00.160) 0:00:16.923 *******
2026-02-13 03:31:13.802510 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:31:13.802527 | orchestrator |
2026-02-13 03:31:13.802545 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-13 03:31:13.802566 | orchestrator | Friday 13 February 2026  03:31:08 +0000 (0:00:00.659) 0:00:17.583 *******
2026-02-13 03:31:13.802585 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:31:13.802604 | orchestrator |
2026-02-13 03:31:13.802619 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-13 03:31:13.802633 | orchestrator | Friday 13 February 2026  03:31:09 +0000 (0:00:00.521) 0:00:18.104 *******
2026-02-13 03:31:13.802646 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:31:13.802658 | orchestrator |
2026-02-13 03:31:13.802670 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-13 03:31:13.802682 | orchestrator | Friday 13 February 2026  03:31:10 +0000 (0:00:00.520) 0:00:18.625 *******
2026-02-13 03:31:13.802695 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:31:13.802708 | orchestrator |
2026-02-13 03:31:13.802720 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-13 03:31:13.802733 | orchestrator | Friday 13 February 2026  03:31:10 +0000 (0:00:00.151) 0:00:18.777 *******
2026-02-13 03:31:13.802746 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.802758 | orchestrator |
2026-02-13 03:31:13.802771 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-13 03:31:13.802811 | orchestrator | Friday 13 February 2026  03:31:10 +0000 (0:00:00.118) 0:00:18.895 *******
2026-02-13 03:31:13.802824 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.802836 | orchestrator |
2026-02-13 03:31:13.802849 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-13 03:31:13.802861 | orchestrator | Friday 13 February 2026  03:31:10 +0000 (0:00:00.109) 0:00:19.004 *******
2026-02-13 03:31:13.802874 | orchestrator | ok: [testbed-node-3] => {
2026-02-13 03:31:13.802887 | orchestrator |     "vgs_report": {
2026-02-13 03:31:13.802901 | orchestrator |         "vg": []
2026-02-13 03:31:13.802921 | orchestrator |     }
2026-02-13 03:31:13.802940 | orchestrator | }
2026-02-13 03:31:13.802958 | orchestrator |
2026-02-13 03:31:13.802976 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-13 03:31:13.802994 | orchestrator | Friday 13 February 2026  03:31:10 +0000 (0:00:00.143) 0:00:19.148 *******
2026-02-13 03:31:13.803012 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803031 | orchestrator |
2026-02-13 03:31:13.803049 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-13 03:31:13.803068 | orchestrator | Friday 13 February 2026  03:31:10 +0000 (0:00:00.141) 0:00:19.290 *******
2026-02-13 03:31:13.803086 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803106 | orchestrator |
2026-02-13 03:31:13.803124 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-13 03:31:13.803143 | orchestrator | Friday 13 February 2026  03:31:11 +0000 (0:00:00.324) 0:00:19.614 *******
2026-02-13 03:31:13.803154 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803165 | orchestrator |
2026-02-13 03:31:13.803176 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-13 03:31:13.803186 | orchestrator | Friday 13 February 2026  03:31:11 +0000 (0:00:00.140) 0:00:19.754 *******
2026-02-13 03:31:13.803218 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803230 | orchestrator |
2026-02-13 03:31:13.803240 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-13 03:31:13.803251 | orchestrator | Friday 13 February 2026  03:31:11 +0000 (0:00:00.138) 0:00:19.893 *******
2026-02-13 03:31:13.803262 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803272 | orchestrator |
2026-02-13 03:31:13.803283 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-13 03:31:13.803293 | orchestrator | Friday 13 February 2026  03:31:11 +0000 (0:00:00.146) 0:00:20.039 *******
2026-02-13 03:31:13.803304 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803315 | orchestrator |
2026-02-13 03:31:13.803326 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-13 03:31:13.803336 | orchestrator | Friday 13 February 2026  03:31:11 +0000 (0:00:00.138) 0:00:20.178 *******
2026-02-13 03:31:13.803347 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803358 | orchestrator |
2026-02-13 03:31:13.803369 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-13 03:31:13.803380 | orchestrator | Friday 13 February 2026  03:31:11 +0000 (0:00:00.143) 0:00:20.322 *******
2026-02-13 03:31:13.803412 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803423 | orchestrator |
2026-02-13 03:31:13.803434 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-13 03:31:13.803445 | orchestrator | Friday 13 February 2026  03:31:11 +0000 (0:00:00.131) 0:00:20.453 *******
2026-02-13 03:31:13.803456 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803467 | orchestrator |
2026-02-13 03:31:13.803478 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-13 03:31:13.803489 | orchestrator | Friday 13 February 2026  03:31:11 +0000 (0:00:00.129) 0:00:20.582 *******
2026-02-13 03:31:13.803499 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803510 | orchestrator |
2026-02-13 03:31:13.803521 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-13 03:31:13.803532 | orchestrator | Friday 13 February 2026  03:31:12 +0000 (0:00:00.139) 0:00:20.722 *******
2026-02-13 03:31:13.803553 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803564 | orchestrator |
2026-02-13 03:31:13.803575 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-13 03:31:13.803599 | orchestrator | Friday 13 February 2026  03:31:12 +0000 (0:00:00.140) 0:00:20.863 *******
2026-02-13 03:31:13.803621 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803632 | orchestrator |
2026-02-13 03:31:13.803651 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-13 03:31:13.803662 | orchestrator | Friday 13 February 2026  03:31:12 +0000 (0:00:00.132) 0:00:20.995 *******
2026-02-13 03:31:13.803673 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803683 | orchestrator |
2026-02-13 03:31:13.803694 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-13 03:31:13.803705 | orchestrator | Friday 13 February 2026  03:31:12 +0000 (0:00:00.136) 0:00:21.132 *******
2026-02-13 03:31:13.803715 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803726 | orchestrator |
2026-02-13 03:31:13.803737 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-13 03:31:13.803747 | orchestrator | Friday 13 February 2026  03:31:12 +0000 (0:00:00.330) 0:00:21.462 *******
2026-02-13 03:31:13.803759 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:13.803772 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:13.803783 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803794 | orchestrator |
2026-02-13 03:31:13.803805 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-13 03:31:13.803815 | orchestrator | Friday 13 February 2026  03:31:13 +0000 (0:00:00.159) 0:00:21.622 *******
2026-02-13 03:31:13.803826 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:13.803837 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:13.803848 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803859 | orchestrator |
2026-02-13 03:31:13.803870 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-13 03:31:13.803881 | orchestrator | Friday 13 February 2026  03:31:13 +0000 (0:00:00.157) 0:00:21.779 *******
2026-02-13 03:31:13.803892 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:13.803902 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:13.803913 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803924 | orchestrator |
2026-02-13 03:31:13.803935 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-13 03:31:13.803946 | orchestrator | Friday 13 February 2026  03:31:13 +0000 (0:00:00.157) 0:00:21.936 *******
2026-02-13 03:31:13.803956 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:13.803967 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:13.803978 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.803989 | orchestrator |
2026-02-13 03:31:13.804000 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-13 03:31:13.804010 | orchestrator | Friday 13 February 2026  03:31:13 +0000 (0:00:00.162) 0:00:22.099 *******
2026-02-13 03:31:13.804028 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:13.804039 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:13.804050 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:13.804061 | orchestrator |
2026-02-13 03:31:13.804074 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-13 03:31:13.804093 | orchestrator | Friday 13 February 2026  03:31:13 +0000 (0:00:00.159) 0:00:22.259 *******
2026-02-13 03:31:13.804130 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:19.367736 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:19.367846 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:19.367863 | orchestrator |
2026-02-13 03:31:19.367877 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-13 03:31:19.367890 | orchestrator | Friday 13 February 2026  03:31:13 +0000 (0:00:00.144) 0:00:22.403 *******
2026-02-13 03:31:19.367901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:19.367913 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:19.367924 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:19.367935 | orchestrator |
2026-02-13 03:31:19.367962 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-13 03:31:19.367974 | orchestrator | Friday 13 February 2026  03:31:13 +0000 (0:00:00.160) 0:00:22.563 *******
2026-02-13 03:31:19.367985 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:19.367996 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:19.368008 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:19.368018 | orchestrator |
2026-02-13 03:31:19.368030 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-13 03:31:19.368041 | orchestrator | Friday 13 February 2026  03:31:14 +0000 (0:00:00.161) 0:00:22.725 *******
2026-02-13 03:31:19.368052 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:31:19.368063 | orchestrator |
2026-02-13 03:31:19.368074 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-13 03:31:19.368085 | orchestrator | Friday 13 February 2026  03:31:14 +0000 (0:00:00.532) 0:00:23.257 *******
2026-02-13 03:31:19.368096 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:31:19.368107 | orchestrator |
2026-02-13 03:31:19.368118 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-13 03:31:19.368128 | orchestrator | Friday 13 February 2026  03:31:15 +0000 (0:00:00.508) 0:00:23.766 *******
2026-02-13 03:31:19.368139 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:31:19.368150 | orchestrator |
2026-02-13 03:31:19.368161 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-13 03:31:19.368173 | orchestrator | Friday 13 February 2026  03:31:15 +0000 (0:00:00.155) 0:00:23.921 *******
2026-02-13 03:31:19.368185 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'vg_name': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:19.368265 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'vg_name': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:19.368318 | orchestrator |
2026-02-13 03:31:19.368332 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-13 03:31:19.368345 | orchestrator | Friday 13 February 2026  03:31:15 +0000 (0:00:00.181) 0:00:24.102 *******
2026-02-13 03:31:19.368357 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:19.368370 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:19.368382 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:19.368395 | orchestrator |
2026-02-13 03:31:19.368407 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-13 03:31:19.368420 | orchestrator | Friday 13 February 2026  03:31:15 +0000 (0:00:00.373) 0:00:24.476 *******
2026-02-13 03:31:19.368432 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:19.368445 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:19.368457 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:19.368469 | orchestrator |
2026-02-13 03:31:19.368482 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-13 03:31:19.368494 | orchestrator | Friday 13 February 2026  03:31:16 +0000 (0:00:00.168) 0:00:24.645 *******
2026-02-13 03:31:19.368506 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})
2026-02-13 03:31:19.368519 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})
2026-02-13 03:31:19.368531 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:31:19.368543 | orchestrator |
2026-02-13 03:31:19.368555 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-13 03:31:19.368568 | orchestrator | Friday 13 February 2026  03:31:16 +0000 (0:00:00.163) 0:00:24.809 *******
2026-02-13 03:31:19.368598 | orchestrator | ok: [testbed-node-3] => {
2026-02-13 03:31:19.368611 | orchestrator |     "lvm_report": {
2026-02-13 03:31:19.368625 | orchestrator |         "lv": [
2026-02-13 03:31:19.368637 | orchestrator |             {
2026-02-13 03:31:19.368647 | orchestrator |                 "lv_name": "osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab",
2026-02-13 03:31:19.368659 | orchestrator |                 "vg_name": "ceph-7c5ad083-16ef-5861-9238-a28b124c66ab"
2026-02-13 03:31:19.368670 | orchestrator |             },
2026-02-13 03:31:19.368681 | orchestrator |             {
2026-02-13 03:31:19.368692 | orchestrator |                 "lv_name": "osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f",
2026-02-13 03:31:19.368703 | orchestrator |                 "vg_name": "ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f"
2026-02-13 03:31:19.368713 | orchestrator |             }
2026-02-13 03:31:19.368724 | orchestrator |         ],
2026-02-13 03:31:19.368735 | orchestrator |         "pv": [
2026-02-13 03:31:19.368745 | orchestrator |             {
2026-02-13 03:31:19.368756 | orchestrator |                 "pv_name": "/dev/sdb",
2026-02-13 03:31:19.368767 | orchestrator |                 "vg_name": "ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f"
2026-02-13 03:31:19.368778 | orchestrator |             },
2026-02-13 03:31:19.368788 | orchestrator |             {
2026-02-13 03:31:19.368805 | orchestrator |                 "pv_name": "/dev/sdc",
2026-02-13 03:31:19.368817 | orchestrator |                 "vg_name": "ceph-7c5ad083-16ef-5861-9238-a28b124c66ab"
2026-02-13 03:31:19.368827 | orchestrator |             }
2026-02-13 03:31:19.368838 | orchestrator |         ]
2026-02-13 03:31:19.368849 | orchestrator |     }
2026-02-13 03:31:19.368860 | orchestrator | }
2026-02-13 03:31:19.368879 | orchestrator |
2026-02-13 03:31:19.368890 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-13 03:31:19.368901 | orchestrator |
2026-02-13 03:31:19.368911 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-13 03:31:19.368923 | orchestrator | Friday 13 February 2026  03:31:16 +0000 (0:00:00.310) 0:00:25.119 *******
2026-02-13 03:31:19.368933 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-13 03:31:19.368944 | orchestrator |
2026-02-13 03:31:19.368955 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-13 03:31:19.368966 | orchestrator | Friday 13 February 2026  03:31:16 +0000 (0:00:00.300) 0:00:25.419 *******
2026-02-13 03:31:19.368977 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:31:19.368987 | orchestrator |
2026-02-13 03:31:19.368998 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:19.369009 | orchestrator | Friday 13 February 2026  03:31:17 +0000 (0:00:00.325) 0:00:25.745 *******
2026-02-13 03:31:19.369020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-13 03:31:19.369030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-13 03:31:19.369041 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-13 03:31:19.369052 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-13 03:31:19.369062 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-13 03:31:19.369073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-13 03:31:19.369084 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-13 03:31:19.369095 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-13 03:31:19.369105 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-13 03:31:19.369116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-13 03:31:19.369127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-13 03:31:19.369138 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-13 03:31:19.369149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-13 03:31:19.369159 | orchestrator |
2026-02-13 03:31:19.369170 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:19.369181 | orchestrator | Friday 13 February 2026  03:31:17 +0000 (0:00:00.450) 0:00:26.195 *******
2026-02-13 03:31:19.369191 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:19.369225 | orchestrator |
2026-02-13 03:31:19.369237 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:19.369248 | orchestrator | Friday 13 February 2026  03:31:17 +0000 (0:00:00.211) 0:00:26.406 *******
2026-02-13 03:31:19.369259 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:19.369270 | orchestrator |
2026-02-13 03:31:19.369281 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:19.369292 | orchestrator | Friday 13 February 2026  03:31:18 +0000 (0:00:00.637) 0:00:27.044 *******
2026-02-13 03:31:19.369302 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:19.369313 | orchestrator |
2026-02-13 03:31:19.369324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:19.369335 | orchestrator | Friday 13 February 2026  03:31:18 +0000 (0:00:00.232) 0:00:27.276 *******
2026-02-13 03:31:19.369346 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:19.369356 | orchestrator |
2026-02-13 03:31:19.369367 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:19.369378 | orchestrator | Friday 13 February 2026  03:31:18 +0000 (0:00:00.262) 0:00:27.539 *******
2026-02-13 03:31:19.369396 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:19.369407 | orchestrator |
2026-02-13 03:31:19.369418 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:19.369429 | orchestrator | Friday 13 February 2026  03:31:19 +0000 (0:00:00.215) 0:00:27.755 *******
2026-02-13 03:31:19.369439 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:19.369450 | orchestrator |
2026-02-13 03:31:19.369468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:30.580638 | orchestrator | Friday 13 February 2026  03:31:19 +0000 (0:00:00.213) 0:00:27.968 *******
2026-02-13 03:31:30.580733 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:30.580744 | orchestrator |
2026-02-13 03:31:30.580753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:30.580760 | orchestrator | Friday 13 February 2026  03:31:19 +0000 (0:00:00.214) 0:00:28.183 *******
2026-02-13 03:31:30.580767 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:30.580774 | orchestrator |
2026-02-13 03:31:30.580781 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:30.580788 | orchestrator | Friday 13 February 2026  03:31:19 +0000 (0:00:00.221) 0:00:28.405 *******
2026-02-13 03:31:30.580795 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d)
2026-02-13 03:31:30.580803 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d)
2026-02-13 03:31:30.580810 | orchestrator |
2026-02-13 03:31:30.580831 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:30.580838 | orchestrator | Friday 13 February 2026  03:31:20 +0000 (0:00:00.438) 0:00:28.843 *******
2026-02-13 03:31:30.580845 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788)
2026-02-13 03:31:30.580852 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788)
2026-02-13 03:31:30.580858 | orchestrator |
2026-02-13 03:31:30.580865 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:30.580872 | orchestrator | Friday 13 February 2026  03:31:20 +0000 (0:00:00.469) 0:00:29.313 *******
2026-02-13 03:31:30.580878 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52)
2026-02-13 03:31:30.580885 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52)
2026-02-13 03:31:30.580892 | orchestrator |
2026-02-13 03:31:30.580898 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:30.580905 | orchestrator | Friday 13 February 2026  03:31:21 +0000 (0:00:00.702) 0:00:30.015 *******
2026-02-13 03:31:30.580912 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460)
2026-02-13 03:31:30.580918 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460)
2026-02-13 03:31:30.580925 | orchestrator |
2026-02-13 03:31:30.580932 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:30.580938 | orchestrator | Friday 13 February 2026  03:31:22 +0000 (0:00:00.907) 0:00:30.922 *******
2026-02-13 03:31:30.580945 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-13 03:31:30.580952 | orchestrator |
2026-02-13 03:31:30.580958 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:30.580965 | orchestrator | Friday 13 February 2026  03:31:22 +0000 (0:00:00.368) 0:00:31.291 *******
2026-02-13 03:31:30.580972 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 =>
(item=loop0) 2026-02-13 03:31:30.580979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-02-13 03:31:30.580986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-02-13 03:31:30.581010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-02-13 03:31:30.581017 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-02-13 03:31:30.581024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-02-13 03:31:30.581030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-02-13 03:31:30.581037 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-02-13 03:31:30.581043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-02-13 03:31:30.581050 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-02-13 03:31:30.581056 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-02-13 03:31:30.581063 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-02-13 03:31:30.581070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-02-13 03:31:30.581076 | orchestrator | 2026-02-13 03:31:30.581083 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581089 | orchestrator | Friday 13 February 2026 03:31:23 +0000 (0:00:00.436) 0:00:31.727 ******* 2026-02-13 03:31:30.581096 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581103 | orchestrator | 2026-02-13 
03:31:30.581109 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581116 | orchestrator | Friday 13 February 2026 03:31:23 +0000 (0:00:00.233) 0:00:31.961 ******* 2026-02-13 03:31:30.581122 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581129 | orchestrator | 2026-02-13 03:31:30.581136 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581142 | orchestrator | Friday 13 February 2026 03:31:23 +0000 (0:00:00.206) 0:00:32.167 ******* 2026-02-13 03:31:30.581149 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581156 | orchestrator | 2026-02-13 03:31:30.581175 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581183 | orchestrator | Friday 13 February 2026 03:31:23 +0000 (0:00:00.208) 0:00:32.375 ******* 2026-02-13 03:31:30.581191 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581199 | orchestrator | 2026-02-13 03:31:30.581206 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581254 | orchestrator | Friday 13 February 2026 03:31:23 +0000 (0:00:00.217) 0:00:32.593 ******* 2026-02-13 03:31:30.581262 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581270 | orchestrator | 2026-02-13 03:31:30.581278 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581286 | orchestrator | Friday 13 February 2026 03:31:24 +0000 (0:00:00.215) 0:00:32.809 ******* 2026-02-13 03:31:30.581293 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581301 | orchestrator | 2026-02-13 03:31:30.581309 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581317 | orchestrator | Friday 13 February 2026 03:31:24 +0000 (0:00:00.208) 
0:00:33.017 ******* 2026-02-13 03:31:30.581329 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581337 | orchestrator | 2026-02-13 03:31:30.581344 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581352 | orchestrator | Friday 13 February 2026 03:31:24 +0000 (0:00:00.204) 0:00:33.222 ******* 2026-02-13 03:31:30.581360 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581367 | orchestrator | 2026-02-13 03:31:30.581375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581382 | orchestrator | Friday 13 February 2026 03:31:25 +0000 (0:00:00.623) 0:00:33.845 ******* 2026-02-13 03:31:30.581390 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-02-13 03:31:30.581404 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-02-13 03:31:30.581412 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-02-13 03:31:30.581420 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-02-13 03:31:30.581427 | orchestrator | 2026-02-13 03:31:30.581435 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581443 | orchestrator | Friday 13 February 2026 03:31:25 +0000 (0:00:00.719) 0:00:34.565 ******* 2026-02-13 03:31:30.581451 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581459 | orchestrator | 2026-02-13 03:31:30.581466 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581474 | orchestrator | Friday 13 February 2026 03:31:26 +0000 (0:00:00.218) 0:00:34.783 ******* 2026-02-13 03:31:30.581482 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581490 | orchestrator | 2026-02-13 03:31:30.581497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581505 | orchestrator | Friday 13 
February 2026 03:31:26 +0000 (0:00:00.214) 0:00:34.997 ******* 2026-02-13 03:31:30.581513 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581521 | orchestrator | 2026-02-13 03:31:30.581529 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-13 03:31:30.581537 | orchestrator | Friday 13 February 2026 03:31:26 +0000 (0:00:00.227) 0:00:35.225 ******* 2026-02-13 03:31:30.581545 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581553 | orchestrator | 2026-02-13 03:31:30.581560 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-13 03:31:30.581567 | orchestrator | Friday 13 February 2026 03:31:26 +0000 (0:00:00.217) 0:00:35.442 ******* 2026-02-13 03:31:30.581573 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581580 | orchestrator | 2026-02-13 03:31:30.581587 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-13 03:31:30.581594 | orchestrator | Friday 13 February 2026 03:31:26 +0000 (0:00:00.140) 0:00:35.583 ******* 2026-02-13 03:31:30.581600 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}}) 2026-02-13 03:31:30.581610 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5ce47f09-4cf3-58ef-8e90-2b997425535f'}}) 2026-02-13 03:31:30.581622 | orchestrator | 2026-02-13 03:31:30.581633 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-13 03:31:30.581643 | orchestrator | Friday 13 February 2026 03:31:27 +0000 (0:00:00.202) 0:00:35.785 ******* 2026-02-13 03:31:30.581655 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}) 2026-02-13 03:31:30.581667 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}) 2026-02-13 03:31:30.581678 | orchestrator | 2026-02-13 03:31:30.581688 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-13 03:31:30.581699 | orchestrator | Friday 13 February 2026 03:31:29 +0000 (0:00:01.858) 0:00:37.644 ******* 2026-02-13 03:31:30.581709 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 03:31:30.581720 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 03:31:30.581732 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:30.581745 | orchestrator | 2026-02-13 03:31:30.581757 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-13 03:31:30.581769 | orchestrator | Friday 13 February 2026 03:31:29 +0000 (0:00:00.150) 0:00:37.794 ******* 2026-02-13 03:31:30.581781 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}) 2026-02-13 03:31:30.581808 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}) 2026-02-13 03:31:36.532344 | orchestrator | 2026-02-13 03:31:36.532457 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-13 03:31:36.532474 | orchestrator | Friday 13 February 2026 03:31:30 +0000 (0:00:01.383) 0:00:39.177 ******* 2026-02-13 03:31:36.532487 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 
'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 03:31:36.532501 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 03:31:36.532512 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.532524 | orchestrator | 2026-02-13 03:31:36.532553 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-13 03:31:36.532564 | orchestrator | Friday 13 February 2026 03:31:30 +0000 (0:00:00.360) 0:00:39.538 ******* 2026-02-13 03:31:36.532575 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.532586 | orchestrator | 2026-02-13 03:31:36.532597 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-13 03:31:36.532608 | orchestrator | Friday 13 February 2026 03:31:31 +0000 (0:00:00.157) 0:00:39.695 ******* 2026-02-13 03:31:36.532619 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 03:31:36.532630 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 03:31:36.532641 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.532652 | orchestrator | 2026-02-13 03:31:36.532663 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-13 03:31:36.532674 | orchestrator | Friday 13 February 2026 03:31:31 +0000 (0:00:00.151) 0:00:39.847 ******* 2026-02-13 03:31:36.532685 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.532696 | orchestrator | 2026-02-13 03:31:36.532707 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-13 03:31:36.532717 | orchestrator | Friday 
13 February 2026 03:31:31 +0000 (0:00:00.146) 0:00:39.993 ******* 2026-02-13 03:31:36.532728 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 03:31:36.532739 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 03:31:36.532751 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.532770 | orchestrator | 2026-02-13 03:31:36.532795 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-13 03:31:36.532824 | orchestrator | Friday 13 February 2026 03:31:31 +0000 (0:00:00.157) 0:00:40.151 ******* 2026-02-13 03:31:36.532842 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.532862 | orchestrator | 2026-02-13 03:31:36.532881 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-13 03:31:36.532899 | orchestrator | Friday 13 February 2026 03:31:31 +0000 (0:00:00.144) 0:00:40.295 ******* 2026-02-13 03:31:36.532919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 03:31:36.532937 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 03:31:36.532956 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.532974 | orchestrator | 2026-02-13 03:31:36.532985 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-13 03:31:36.533021 | orchestrator | Friday 13 February 2026 03:31:31 +0000 (0:00:00.148) 0:00:40.444 ******* 2026-02-13 03:31:36.533032 | orchestrator | ok: [testbed-node-4] 
2026-02-13 03:31:36.533044 | orchestrator | 2026-02-13 03:31:36.533055 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-13 03:31:36.533066 | orchestrator | Friday 13 February 2026 03:31:31 +0000 (0:00:00.145) 0:00:40.590 ******* 2026-02-13 03:31:36.533079 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 03:31:36.533097 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 03:31:36.533123 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.533146 | orchestrator | 2026-02-13 03:31:36.533163 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-13 03:31:36.533180 | orchestrator | Friday 13 February 2026 03:31:32 +0000 (0:00:00.150) 0:00:40.740 ******* 2026-02-13 03:31:36.533198 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 03:31:36.533215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 03:31:36.533263 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.533281 | orchestrator | 2026-02-13 03:31:36.533300 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-13 03:31:36.533344 | orchestrator | Friday 13 February 2026 03:31:32 +0000 (0:00:00.148) 0:00:40.889 ******* 2026-02-13 03:31:36.533365 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 
03:31:36.533384 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 03:31:36.533402 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.533413 | orchestrator | 2026-02-13 03:31:36.533424 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-13 03:31:36.533435 | orchestrator | Friday 13 February 2026 03:31:32 +0000 (0:00:00.161) 0:00:41.050 ******* 2026-02-13 03:31:36.533455 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.533466 | orchestrator | 2026-02-13 03:31:36.533477 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-13 03:31:36.533488 | orchestrator | Friday 13 February 2026 03:31:32 +0000 (0:00:00.337) 0:00:41.388 ******* 2026-02-13 03:31:36.533498 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.533509 | orchestrator | 2026-02-13 03:31:36.533520 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-13 03:31:36.533531 | orchestrator | Friday 13 February 2026 03:31:32 +0000 (0:00:00.149) 0:00:41.537 ******* 2026-02-13 03:31:36.533542 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.533552 | orchestrator | 2026-02-13 03:31:36.533563 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-13 03:31:36.533574 | orchestrator | Friday 13 February 2026 03:31:33 +0000 (0:00:00.168) 0:00:41.706 ******* 2026-02-13 03:31:36.533585 | orchestrator | ok: [testbed-node-4] => { 2026-02-13 03:31:36.533595 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-13 03:31:36.533606 | orchestrator | } 2026-02-13 03:31:36.533617 | orchestrator | 2026-02-13 03:31:36.533628 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-13 
03:31:36.533640 | orchestrator | Friday 13 February 2026 03:31:33 +0000 (0:00:00.154) 0:00:41.861 ******* 2026-02-13 03:31:36.533650 | orchestrator | ok: [testbed-node-4] => { 2026-02-13 03:31:36.533661 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-13 03:31:36.533683 | orchestrator | } 2026-02-13 03:31:36.533694 | orchestrator | 2026-02-13 03:31:36.533705 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-13 03:31:36.533716 | orchestrator | Friday 13 February 2026 03:31:33 +0000 (0:00:00.161) 0:00:42.023 ******* 2026-02-13 03:31:36.533726 | orchestrator | ok: [testbed-node-4] => { 2026-02-13 03:31:36.533737 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-13 03:31:36.533748 | orchestrator | } 2026-02-13 03:31:36.533759 | orchestrator | 2026-02-13 03:31:36.533769 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-13 03:31:36.533780 | orchestrator | Friday 13 February 2026 03:31:33 +0000 (0:00:00.147) 0:00:42.170 ******* 2026-02-13 03:31:36.533791 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:31:36.533801 | orchestrator | 2026-02-13 03:31:36.533812 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-13 03:31:36.533823 | orchestrator | Friday 13 February 2026 03:31:34 +0000 (0:00:00.523) 0:00:42.694 ******* 2026-02-13 03:31:36.533833 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:31:36.533844 | orchestrator | 2026-02-13 03:31:36.533855 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-13 03:31:36.533866 | orchestrator | Friday 13 February 2026 03:31:34 +0000 (0:00:00.557) 0:00:43.251 ******* 2026-02-13 03:31:36.533877 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:31:36.533887 | orchestrator | 2026-02-13 03:31:36.533898 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-02-13 03:31:36.533909 | orchestrator | Friday 13 February 2026 03:31:35 +0000 (0:00:00.556) 0:00:43.808 ******* 2026-02-13 03:31:36.533919 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:31:36.533930 | orchestrator | 2026-02-13 03:31:36.533940 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-13 03:31:36.533953 | orchestrator | Friday 13 February 2026 03:31:35 +0000 (0:00:00.157) 0:00:43.965 ******* 2026-02-13 03:31:36.533975 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.534003 | orchestrator | 2026-02-13 03:31:36.534144 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-13 03:31:36.534168 | orchestrator | Friday 13 February 2026 03:31:35 +0000 (0:00:00.136) 0:00:44.101 ******* 2026-02-13 03:31:36.534187 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.534205 | orchestrator | 2026-02-13 03:31:36.534248 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-13 03:31:36.534268 | orchestrator | Friday 13 February 2026 03:31:35 +0000 (0:00:00.315) 0:00:44.417 ******* 2026-02-13 03:31:36.534287 | orchestrator | ok: [testbed-node-4] => { 2026-02-13 03:31:36.534306 | orchestrator |  "vgs_report": { 2026-02-13 03:31:36.534325 | orchestrator |  "vg": [] 2026-02-13 03:31:36.534344 | orchestrator |  } 2026-02-13 03:31:36.534363 | orchestrator | } 2026-02-13 03:31:36.534381 | orchestrator | 2026-02-13 03:31:36.534393 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-13 03:31:36.534404 | orchestrator | Friday 13 February 2026 03:31:35 +0000 (0:00:00.143) 0:00:44.561 ******* 2026-02-13 03:31:36.534415 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.534425 | orchestrator | 2026-02-13 03:31:36.534436 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-02-13 03:31:36.534447 | orchestrator | Friday 13 February 2026 03:31:36 +0000 (0:00:00.146) 0:00:44.707 ******* 2026-02-13 03:31:36.534457 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.534468 | orchestrator | 2026-02-13 03:31:36.534479 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-13 03:31:36.534489 | orchestrator | Friday 13 February 2026 03:31:36 +0000 (0:00:00.141) 0:00:44.849 ******* 2026-02-13 03:31:36.534500 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.534510 | orchestrator | 2026-02-13 03:31:36.534521 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-13 03:31:36.534532 | orchestrator | Friday 13 February 2026 03:31:36 +0000 (0:00:00.142) 0:00:44.991 ******* 2026-02-13 03:31:36.534556 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:36.534567 | orchestrator | 2026-02-13 03:31:36.534591 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-13 03:31:41.501002 | orchestrator | Friday 13 February 2026 03:31:36 +0000 (0:00:00.140) 0:00:45.132 ******* 2026-02-13 03:31:41.501085 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:41.501092 | orchestrator | 2026-02-13 03:31:41.501097 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-13 03:31:41.501102 | orchestrator | Friday 13 February 2026 03:31:36 +0000 (0:00:00.156) 0:00:45.289 ******* 2026-02-13 03:31:41.501106 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:41.501110 | orchestrator | 2026-02-13 03:31:41.501156 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-13 03:31:41.501161 | orchestrator | Friday 13 February 2026 03:31:36 +0000 (0:00:00.131) 0:00:45.420 ******* 2026-02-13 03:31:41.501166 | orchestrator | skipping: [testbed-node-4] 
2026-02-13 03:31:41.501170 | orchestrator | 2026-02-13 03:31:41.501184 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-13 03:31:41.501189 | orchestrator | Friday 13 February 2026 03:31:36 +0000 (0:00:00.135) 0:00:45.556 ******* 2026-02-13 03:31:41.501193 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:41.501197 | orchestrator | 2026-02-13 03:31:41.501201 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-13 03:31:41.501205 | orchestrator | Friday 13 February 2026 03:31:37 +0000 (0:00:00.143) 0:00:45.700 ******* 2026-02-13 03:31:41.501209 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:41.501212 | orchestrator | 2026-02-13 03:31:41.501216 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-13 03:31:41.501220 | orchestrator | Friday 13 February 2026 03:31:37 +0000 (0:00:00.137) 0:00:45.837 ******* 2026-02-13 03:31:41.501278 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:41.501282 | orchestrator | 2026-02-13 03:31:41.501286 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-13 03:31:41.501290 | orchestrator | Friday 13 February 2026 03:31:37 +0000 (0:00:00.369) 0:00:46.207 ******* 2026-02-13 03:31:41.501294 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:41.501298 | orchestrator | 2026-02-13 03:31:41.501302 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-13 03:31:41.501305 | orchestrator | Friday 13 February 2026 03:31:37 +0000 (0:00:00.139) 0:00:46.347 ******* 2026-02-13 03:31:41.501309 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:41.501313 | orchestrator | 2026-02-13 03:31:41.501316 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-13 03:31:41.501320 | orchestrator | 
Friday 13 February 2026 03:31:37 +0000 (0:00:00.157) 0:00:46.504 ******* 2026-02-13 03:31:41.501324 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:41.501328 | orchestrator | 2026-02-13 03:31:41.501331 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-13 03:31:41.501335 | orchestrator | Friday 13 February 2026 03:31:38 +0000 (0:00:00.155) 0:00:46.659 ******* 2026-02-13 03:31:41.501339 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:41.501343 | orchestrator | 2026-02-13 03:31:41.501346 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-13 03:31:41.501350 | orchestrator | Friday 13 February 2026 03:31:38 +0000 (0:00:00.148) 0:00:46.808 ******* 2026-02-13 03:31:41.501355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 03:31:41.501360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 03:31:41.501364 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:31:41.501368 | orchestrator | 2026-02-13 03:31:41.501372 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-13 03:31:41.501389 | orchestrator | Friday 13 February 2026 03:31:38 +0000 (0:00:00.164) 0:00:46.972 ******* 2026-02-13 03:31:41.501393 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 03:31:41.501397 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 03:31:41.501401 | orchestrator | skipping: 
[testbed-node-4]
2026-02-13 03:31:41.501405 | orchestrator |
2026-02-13 03:31:41.501409 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-13 03:31:41.501412 | orchestrator | Friday 13 February 2026 03:31:38 +0000 (0:00:00.171) 0:00:47.143 *******
2026-02-13 03:31:41.501416 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})
2026-02-13 03:31:41.501420 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})
2026-02-13 03:31:41.501424 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:41.501428 | orchestrator |
2026-02-13 03:31:41.501432 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-13 03:31:41.501435 | orchestrator | Friday 13 February 2026 03:31:38 +0000 (0:00:00.177) 0:00:47.321 *******
2026-02-13 03:31:41.501439 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})
2026-02-13 03:31:41.501443 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})
2026-02-13 03:31:41.501447 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:41.501451 | orchestrator |
2026-02-13 03:31:41.501465 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-13 03:31:41.501470 | orchestrator | Friday 13 February 2026 03:31:38 +0000 (0:00:00.153) 0:00:47.474 *******
2026-02-13 03:31:41.501473 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})
2026-02-13 03:31:41.501477 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})
2026-02-13 03:31:41.501481 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:41.501485 | orchestrator |
2026-02-13 03:31:41.501492 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-13 03:31:41.501496 | orchestrator | Friday 13 February 2026 03:31:39 +0000 (0:00:00.182) 0:00:47.657 *******
2026-02-13 03:31:41.501499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})
2026-02-13 03:31:41.501503 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})
2026-02-13 03:31:41.501507 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:41.501511 | orchestrator |
2026-02-13 03:31:41.501514 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-13 03:31:41.501518 | orchestrator | Friday 13 February 2026 03:31:39 +0000 (0:00:00.161) 0:00:47.818 *******
2026-02-13 03:31:41.501522 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})
2026-02-13 03:31:41.501526 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})
2026-02-13 03:31:41.501529 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:41.501538 | orchestrator |
2026-02-13 03:31:41.501541 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-13 03:31:41.501545 | orchestrator | Friday 13 February 2026 03:31:39 +0000 (0:00:00.384) 0:00:48.203 *******
2026-02-13 03:31:41.501549 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})
2026-02-13 03:31:41.501553 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})
2026-02-13 03:31:41.501556 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:41.501560 | orchestrator |
2026-02-13 03:31:41.501564 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-13 03:31:41.501568 | orchestrator | Friday 13 February 2026 03:31:39 +0000 (0:00:00.168) 0:00:48.372 *******
2026-02-13 03:31:41.501571 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:31:41.501576 | orchestrator |
2026-02-13 03:31:41.501580 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-13 03:31:41.501584 | orchestrator | Friday 13 February 2026 03:31:40 +0000 (0:00:00.547) 0:00:48.919 *******
2026-02-13 03:31:41.501589 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:31:41.501593 | orchestrator |
2026-02-13 03:31:41.501597 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-13 03:31:41.501602 | orchestrator | Friday 13 February 2026 03:31:40 +0000 (0:00:00.541) 0:00:49.460 *******
2026-02-13 03:31:41.501606 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:31:41.501610 | orchestrator |
2026-02-13 03:31:41.501615 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-13 03:31:41.501619 | orchestrator | Friday 13 February 2026 03:31:40 +0000 (0:00:00.152) 0:00:49.613 *******
2026-02-13 03:31:41.501623 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'vg_name': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})
2026-02-13 03:31:41.501629 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'vg_name': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})
2026-02-13 03:31:41.501633 | orchestrator |
2026-02-13 03:31:41.501638 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-13 03:31:41.501642 | orchestrator | Friday 13 February 2026 03:31:41 +0000 (0:00:00.175) 0:00:49.788 *******
2026-02-13 03:31:41.501647 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})
2026-02-13 03:31:41.501651 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})
2026-02-13 03:31:41.501655 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:41.501660 | orchestrator |
2026-02-13 03:31:41.501664 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-13 03:31:41.501668 | orchestrator | Friday 13 February 2026 03:31:41 +0000 (0:00:00.163) 0:00:49.952 *******
2026-02-13 03:31:41.501673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})
2026-02-13 03:31:41.501680 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})
2026-02-13 03:31:47.979732 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:47.979859 | orchestrator |
2026-02-13 03:31:47.979875 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-13 03:31:47.979889 | orchestrator | Friday 13 February 2026 03:31:41 +0000 (0:00:00.150) 0:00:50.102 *******
2026-02-13 03:31:47.979900 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})
2026-02-13 03:31:47.979951 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})
2026-02-13 03:31:47.979964 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:31:47.979975 | orchestrator |
2026-02-13 03:31:47.979987 | orchestrator | TASK [Print LVM report data] ***************************************************
2026-02-13 03:31:47.979998 | orchestrator | Friday 13 February 2026 03:31:41 +0000 (0:00:00.161) 0:00:50.263 *******
2026-02-13 03:31:47.980009 | orchestrator | ok: [testbed-node-4] => {
2026-02-13 03:31:47.980020 | orchestrator |  "lvm_report": {
2026-02-13 03:31:47.980033 | orchestrator |  "lv": [
2026-02-13 03:31:47.980043 | orchestrator |  {
2026-02-13 03:31:47.980054 | orchestrator |  "lv_name": "osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6",
2026-02-13 03:31:47.980066 | orchestrator |  "vg_name": "ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6"
2026-02-13 03:31:47.980077 | orchestrator |  },
2026-02-13 03:31:47.980087 | orchestrator |  {
2026-02-13 03:31:47.980098 | orchestrator |  "lv_name": "osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f",
2026-02-13 03:31:47.980109 | orchestrator |  "vg_name": "ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f"
2026-02-13 03:31:47.980119 | orchestrator |  }
2026-02-13 03:31:47.980130 | orchestrator |  ],
2026-02-13 03:31:47.980141 | orchestrator |  "pv": [
2026-02-13 03:31:47.980151 | orchestrator |  {
2026-02-13 03:31:47.980162 | orchestrator |  "pv_name": "/dev/sdb",
2026-02-13 03:31:47.980173 | orchestrator |  "vg_name": "ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6"
2026-02-13 03:31:47.980184 | orchestrator |  },
2026-02-13 03:31:47.980195 | orchestrator |  {
2026-02-13 03:31:47.980205 | orchestrator |  "pv_name": "/dev/sdc",
2026-02-13 03:31:47.980216 | orchestrator |  "vg_name": "ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f"
2026-02-13 03:31:47.980227 | orchestrator |  }
2026-02-13 03:31:47.980316 | orchestrator |  ]
2026-02-13 03:31:47.980328 | orchestrator |  }
2026-02-13 03:31:47.980341 | orchestrator | }
2026-02-13 03:31:47.980353 | orchestrator |
2026-02-13 03:31:47.980366 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-02-13 03:31:47.980379 | orchestrator |
2026-02-13 03:31:47.980391 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-13 03:31:47.980403 | orchestrator | Friday 13 February 2026 03:31:41 +0000 (0:00:00.292) 0:00:50.555 *******
2026-02-13 03:31:47.980416 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-13 03:31:47.980428 | orchestrator |
2026-02-13 03:31:47.980440 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-13 03:31:47.980453 | orchestrator | Friday 13 February 2026 03:31:42 +0000 (0:00:00.246) 0:00:51.230 *******
2026-02-13 03:31:47.980465 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:31:47.980477 | orchestrator |
2026-02-13 03:31:47.980489 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.980501 | orchestrator | Friday 13 February 2026 03:31:42 +0000 (0:00:00.246) 0:00:51.476 *******
2026-02-13 03:31:47.980514 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-02-13 03:31:47.980526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-02-13 03:31:47.980538 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-02-13 03:31:47.980551 |
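The "Get list of Ceph LVs/PVs" and "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" tasks above query LVM and merge the two reports into the `lvm_report` structure the log prints. The actual commands are not visible in this log; a minimal sketch, assuming lvm2's `--reportformat json` output shape (as produced by `lvs -o lv_name,vg_name` and `pvs -o pv_name,vg_name`) and hypothetical sample data:

```python
import json

# Hypothetical sample data shaped like lvm2's `--reportformat json` output
# (the playbook's actual lvs/pvs invocations are not shown in this log).
lvs_out = '{"report": [{"lv": [{"lv_name": "osd-block-43dba57c", "vg_name": "ceph-43dba57c"}]}]}'
pvs_out = '{"report": [{"pv": [{"pv_name": "/dev/sdb", "vg_name": "ceph-43dba57c"}]}]}'

def combine_reports(lvs_json: str, pvs_json: str) -> dict:
    """Merge the lv and pv report sections into one dict, mirroring what the
    'Combine JSON from _lvs_cmd_output/_pvs_cmd_output' task appears to do."""
    lv = json.loads(lvs_json)["report"][0]["lv"]
    pv = json.loads(pvs_json)["report"][0]["pv"]
    return {"lv": lv, "pv": pv}

lvm_report = combine_reports(lvs_out, pvs_out)
print(json.dumps(lvm_report, indent=2))
```

The combined dict has the same `lv`/`pv` layout as the "Print LVM report data" output above.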
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-02-13 03:31:47.980563 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-02-13 03:31:47.980574 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-02-13 03:31:47.980587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-02-13 03:31:47.980612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-02-13 03:31:47.980623 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-02-13 03:31:47.980634 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-02-13 03:31:47.980644 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-02-13 03:31:47.980655 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-02-13 03:31:47.980665 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-02-13 03:31:47.980676 | orchestrator |
2026-02-13 03:31:47.980686 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.980697 | orchestrator | Friday 13 February 2026 03:31:43 +0000 (0:00:00.426) 0:00:51.903 *******
2026-02-13 03:31:47.980707 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:47.980718 | orchestrator |
2026-02-13 03:31:47.980729 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.980739 | orchestrator | Friday 13 February 2026 03:31:43 +0000 (0:00:00.207) 0:00:52.110 *******
2026-02-13 03:31:47.980750 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:47.980761 | orchestrator |
2026-02-13 03:31:47.980772 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.980800 | orchestrator | Friday 13 February 2026 03:31:43 +0000 (0:00:00.206) 0:00:52.317 *******
2026-02-13 03:31:47.980818 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:47.980838 | orchestrator |
2026-02-13 03:31:47.980856 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.980873 | orchestrator | Friday 13 February 2026 03:31:43 +0000 (0:00:00.203) 0:00:52.520 *******
2026-02-13 03:31:47.980891 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:47.980909 | orchestrator |
2026-02-13 03:31:47.980928 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.980947 | orchestrator | Friday 13 February 2026 03:31:44 +0000 (0:00:00.194) 0:00:52.714 *******
2026-02-13 03:31:47.980967 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:47.980986 | orchestrator |
2026-02-13 03:31:47.981005 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.981025 | orchestrator | Friday 13 February 2026 03:31:44 +0000 (0:00:00.201) 0:00:52.915 *******
2026-02-13 03:31:47.981037 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:47.981047 | orchestrator |
2026-02-13 03:31:47.981058 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.981068 | orchestrator | Friday 13 February 2026 03:31:44 +0000 (0:00:00.207) 0:00:53.122 *******
2026-02-13 03:31:47.981079 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:47.981089 | orchestrator |
2026-02-13 03:31:47.981100 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.981111 | orchestrator | Friday 13 February 2026 03:31:44 +0000 (0:00:00.215) 0:00:53.338 *******
2026-02-13 03:31:47.981121 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:47.981132 | orchestrator |
2026-02-13 03:31:47.981142 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.981153 | orchestrator | Friday 13 February 2026 03:31:45 +0000 (0:00:00.633) 0:00:53.971 *******
2026-02-13 03:31:47.981164 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d)
2026-02-13 03:31:47.981175 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d)
2026-02-13 03:31:47.981186 | orchestrator |
2026-02-13 03:31:47.981197 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.981207 | orchestrator | Friday 13 February 2026 03:31:45 +0000 (0:00:00.428) 0:00:54.399 *******
2026-02-13 03:31:47.981279 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e)
2026-02-13 03:31:47.981303 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e)
2026-02-13 03:31:47.981314 | orchestrator |
2026-02-13 03:31:47.981325 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.981335 | orchestrator | Friday 13 February 2026 03:31:46 +0000 (0:00:00.439) 0:00:54.838 *******
2026-02-13 03:31:47.981346 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3)
2026-02-13 03:31:47.981357 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3)
2026-02-13 03:31:47.981367 | orchestrator |
2026-02-13 03:31:47.981378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.981388 | orchestrator | Friday 13 February 2026 03:31:46 +0000 (0:00:00.479) 0:00:55.318 *******
2026-02-13 03:31:47.981399 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d)
2026-02-13 03:31:47.981410 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d)
2026-02-13 03:31:47.981421 | orchestrator |
2026-02-13 03:31:47.981431 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-13 03:31:47.981442 | orchestrator | Friday 13 February 2026 03:31:47 +0000 (0:00:00.460) 0:00:55.778 *******
2026-02-13 03:31:47.981453 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-13 03:31:47.981463 | orchestrator |
2026-02-13 03:31:47.981474 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:47.981484 | orchestrator | Friday 13 February 2026 03:31:47 +0000 (0:00:00.366) 0:00:56.145 *******
2026-02-13 03:31:47.981495 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-02-13 03:31:47.981505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-02-13 03:31:47.981516 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-02-13 03:31:47.981526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-02-13 03:31:47.981537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-02-13 03:31:47.981547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-02-13 03:31:47.981558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-02-13 03:31:47.981568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-02-13 03:31:47.981578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-02-13 03:31:47.981589 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-02-13 03:31:47.981600 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-02-13 03:31:47.981620 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-02-13 03:31:57.017121 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-02-13 03:31:57.017232 | orchestrator |
2026-02-13 03:31:57.017325 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017337 | orchestrator | Friday 13 February 2026 03:31:47 +0000 (0:00:00.425) 0:00:56.570 *******
2026-02-13 03:31:57.017347 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017358 | orchestrator |
2026-02-13 03:31:57.017368 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017394 | orchestrator | Friday 13 February 2026 03:31:48 +0000 (0:00:00.210) 0:00:56.781 *******
2026-02-13 03:31:57.017404 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017434 | orchestrator |
2026-02-13 03:31:57.017444 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017454 | orchestrator | Friday 13 February 2026 03:31:48 +0000 (0:00:00.227) 0:00:57.009 *******
2026-02-13 03:31:57.017464 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017473 | orchestrator |
2026-02-13 03:31:57.017483 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017492 |
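The repeated "Add known links" tasks above collect the stable `/dev/disk/by-id` names for each kernel device (e.g. the two `scsi-*QEMU_QEMU_HARDDISK_*` aliases of one disk), so devices can later be addressed by a persistent ID. The included task file is not shown in this log; a rough sketch of the grouping step, using an illustrative dict in place of resolving the symlinks (e.g. via `os.path.realpath`):

```python
from collections import defaultdict

def links_for_devices(by_id: dict) -> dict:
    """Group /dev/disk/by-id link names by the kernel device they point at,
    e.g. {'sdb': ['scsi-0QEMU_...', 'scsi-SQEMU_...']}. In the real task this
    mapping would come from resolving the symlinks; here it is passed in as an
    illustrative dict because the task file is not visible in the log."""
    grouped = defaultdict(list)
    for link, target in sorted(by_id.items()):
        grouped[target].append(link)
    return dict(grouped)

# Hypothetical sample modeled on the link names seen in the log
sample = {
    "scsi-0QEMU_QEMU_HARDDISK_fd8b8514": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_fd8b8514": "sdb",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}
print(links_for_devices(sample))
```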
orchestrator | Friday 13 February 2026 03:31:48 +0000 (0:00:00.227) 0:00:57.236 *******
2026-02-13 03:31:57.017502 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017511 | orchestrator |
2026-02-13 03:31:57.017521 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017530 | orchestrator | Friday 13 February 2026 03:31:48 +0000 (0:00:00.216) 0:00:57.453 *******
2026-02-13 03:31:57.017540 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017549 | orchestrator |
2026-02-13 03:31:57.017559 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017568 | orchestrator | Friday 13 February 2026 03:31:49 +0000 (0:00:00.618) 0:00:58.071 *******
2026-02-13 03:31:57.017578 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017587 | orchestrator |
2026-02-13 03:31:57.017597 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017607 | orchestrator | Friday 13 February 2026 03:31:49 +0000 (0:00:00.239) 0:00:58.311 *******
2026-02-13 03:31:57.017616 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017626 | orchestrator |
2026-02-13 03:31:57.017635 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017645 | orchestrator | Friday 13 February 2026 03:31:49 +0000 (0:00:00.231) 0:00:58.543 *******
2026-02-13 03:31:57.017655 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017667 | orchestrator |
2026-02-13 03:31:57.017678 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017689 | orchestrator | Friday 13 February 2026 03:31:50 +0000 (0:00:00.209) 0:00:58.752 *******
2026-02-13 03:31:57.017701 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-02-13 03:31:57.017713 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-02-13 03:31:57.017725 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-02-13 03:31:57.017737 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-02-13 03:31:57.017748 | orchestrator |
2026-02-13 03:31:57.017759 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017770 | orchestrator | Friday 13 February 2026 03:31:50 +0000 (0:00:00.676) 0:00:59.428 *******
2026-02-13 03:31:57.017782 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017793 | orchestrator |
2026-02-13 03:31:57.017804 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017816 | orchestrator | Friday 13 February 2026 03:31:51 +0000 (0:00:00.213) 0:00:59.642 *******
2026-02-13 03:31:57.017826 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017838 | orchestrator |
2026-02-13 03:31:57.017849 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017861 | orchestrator | Friday 13 February 2026 03:31:51 +0000 (0:00:00.211) 0:00:59.854 *******
2026-02-13 03:31:57.017872 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017883 | orchestrator |
2026-02-13 03:31:57.017895 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-13 03:31:57.017906 | orchestrator | Friday 13 February 2026 03:31:51 +0000 (0:00:00.206) 0:01:00.060 *******
2026-02-13 03:31:57.017917 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017928 | orchestrator |
2026-02-13 03:31:57.017939 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-13 03:31:57.017950 | orchestrator | Friday 13 February 2026 03:31:51 +0000 (0:00:00.213) 0:01:00.273 *******
2026-02-13 03:31:57.017961 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.017977 | orchestrator |
2026-02-13 03:31:57.018010 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-13 03:31:57.018101 | orchestrator | Friday 13 February 2026 03:31:51 +0000 (0:00:00.137) 0:01:00.411 *******
2026-02-13 03:31:57.018119 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8151fb69-3858-5887-af01-e0d44d84b3e6'}})
2026-02-13 03:31:57.018134 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5f44536a-6e14-5adc-b1bb-0c010a1280f1'}})
2026-02-13 03:31:57.018148 | orchestrator |
2026-02-13 03:31:57.018162 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-13 03:31:57.018179 | orchestrator | Friday 13 February 2026 03:31:51 +0000 (0:00:00.194) 0:01:00.605 *******
2026-02-13 03:31:57.018196 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})
2026-02-13 03:31:57.018214 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})
2026-02-13 03:31:57.018230 | orchestrator |
2026-02-13 03:31:57.018313 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-13 03:31:57.018347 | orchestrator | Friday 13 February 2026 03:31:53 +0000 (0:00:01.930) 0:01:02.536 *******
2026-02-13 03:31:57.018358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})
2026-02-13 03:31:57.018369 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})
2026-02-13 03:31:57.018379 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.018388 | orchestrator |
2026-02-13 03:31:57.018406 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-13 03:31:57.018416 | orchestrator | Friday 13 February 2026 03:31:54 +0000 (0:00:00.365) 0:01:02.902 *******
2026-02-13 03:31:57.018426 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})
2026-02-13 03:31:57.018435 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})
2026-02-13 03:31:57.018445 | orchestrator |
2026-02-13 03:31:57.018454 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-13 03:31:57.018464 | orchestrator | Friday 13 February 2026 03:31:55 +0000 (0:00:01.334) 0:01:04.236 *******
2026-02-13 03:31:57.018473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})
2026-02-13 03:31:57.018483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})
2026-02-13 03:31:57.018493 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.018502 | orchestrator |
2026-02-13 03:31:57.018512 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-13 03:31:57.018521 | orchestrator | Friday 13 February 2026 03:31:55 +0000 (0:00:00.163) 0:01:04.400 *******
2026-02-13 03:31:57.018531 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.018540 | orchestrator |
2026-02-13 03:31:57.018550 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-13 03:31:57.018559 |
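The "Create dict of block VGs -> PVs from ceph_osd_devices" task above turns each OSD device's `osd_lvm_uuid` into the `osd-block-<uuid>` LV and `ceph-<uuid>` VG names that the subsequent "Create block VGs"/"Create block LVs" tasks loop over. A sketch of that derivation, with the naming scheme inferred from the log output rather than taken from the playbook source:

```python
# ceph_osd_devices as echoed by the 'Create dict of block VGs -> PVs' task
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "8151fb69-3858-5887-af01-e0d44d84b3e6"},
    "sdc": {"osd_lvm_uuid": "5f44536a-6e14-5adc-b1bb-0c010a1280f1"},
}

def lvm_volumes_from_devices(devices: dict) -> list:
    """Derive the data/data_vg items looped over by 'Create block VGs' and
    'Create block LVs'; the osd-block-/ceph- prefixes are inferred from the
    log output, not confirmed against the playbook."""
    return [
        {"data": f"osd-block-{cfg['osd_lvm_uuid']}",
         "data_vg": f"ceph-{cfg['osd_lvm_uuid']}"}
        for cfg in devices.values()
    ]

for item in lvm_volumes_from_devices(ceph_osd_devices):
    # Roughly the CLI equivalent of the two 'changed' tasks (not executed here):
    #   vgcreate <data_vg> /dev/<device>
    #   lvcreate -n <data> -l 100%FREE <data_vg>
    print(item["data_vg"])
```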
orchestrator | Friday 13 February 2026 03:31:55 +0000 (0:00:00.145) 0:01:04.546 *******
2026-02-13 03:31:57.018569 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})
2026-02-13 03:31:57.018579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})
2026-02-13 03:31:57.018599 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.018609 | orchestrator |
2026-02-13 03:31:57.018618 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-13 03:31:57.018628 | orchestrator | Friday 13 February 2026 03:31:56 +0000 (0:00:00.160) 0:01:04.707 *******
2026-02-13 03:31:57.018637 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.018647 | orchestrator |
2026-02-13 03:31:57.018657 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-13 03:31:57.018666 | orchestrator | Friday 13 February 2026 03:31:56 +0000 (0:00:00.145) 0:01:04.852 *******
2026-02-13 03:31:57.018675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})
2026-02-13 03:31:57.018685 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})
2026-02-13 03:31:57.018695 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.018704 | orchestrator |
2026-02-13 03:31:57.018714 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-13 03:31:57.018723 | orchestrator | Friday 13 February 2026 03:31:56 +0000 (0:00:00.161) 0:01:05.014 *******
2026-02-13 03:31:57.018733 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.018742 | orchestrator |
2026-02-13 03:31:57.018752 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-13 03:31:57.018761 | orchestrator | Friday 13 February 2026 03:31:56 +0000 (0:00:00.144) 0:01:05.158 *******
2026-02-13 03:31:57.018771 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})
2026-02-13 03:31:57.018780 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})
2026-02-13 03:31:57.018790 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:31:57.018799 | orchestrator |
2026-02-13 03:31:57.018809 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-13 03:31:57.018818 | orchestrator | Friday 13 February 2026 03:31:56 +0000 (0:00:00.140) 0:01:05.321 *******
2026-02-13 03:31:57.018828 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:31:57.018838 | orchestrator |
2026-02-13 03:31:57.018847 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-13 03:31:57.018857 | orchestrator | Friday 13 February 2026 03:31:56 +0000 (0:00:00.140) 0:01:05.462 *******
2026-02-13 03:31:57.018872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})
2026-02-13 03:32:03.604549 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})
2026-02-13 03:32:03.604655 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.604670 | orchestrator |
2026-02-13 03:32:03.604683 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-13 03:32:03.604695 | orchestrator | Friday 13 February 2026 03:31:57 +0000 (0:00:00.155) 0:01:05.618 *******
2026-02-13 03:32:03.604722 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})
2026-02-13 03:32:03.604732 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})
2026-02-13 03:32:03.604743 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.604753 | orchestrator |
2026-02-13 03:32:03.604763 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-13 03:32:03.604773 | orchestrator | Friday 13 February 2026 03:31:57 +0000 (0:00:00.160) 0:01:05.779 *******
2026-02-13 03:32:03.604804 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})
2026-02-13 03:32:03.604831 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})
2026-02-13 03:32:03.604842 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.604861 | orchestrator |
2026-02-13 03:32:03.604872 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-13 03:32:03.604882 | orchestrator | Friday 13 February 2026 03:31:57 +0000 (0:00:00.375) 0:01:06.154 *******
2026-02-13 03:32:03.604892 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.604901 | orchestrator |
2026-02-13 03:32:03.604911 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-13 03:32:03.604921 | orchestrator | Friday 13 February 2026 03:31:57 +0000 (0:00:00.147) 0:01:06.301 *******
2026-02-13 03:32:03.604931 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.604941 | orchestrator |
2026-02-13 03:32:03.604952 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-13 03:32:03.604961 | orchestrator | Friday 13 February 2026 03:31:57 +0000 (0:00:00.164) 0:01:06.466 *******
2026-02-13 03:32:03.604971 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.604981 | orchestrator |
2026-02-13 03:32:03.604991 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-13 03:32:03.605000 | orchestrator | Friday 13 February 2026 03:31:58 +0000 (0:00:00.146) 0:01:06.612 *******
2026-02-13 03:32:03.605010 | orchestrator | ok: [testbed-node-5] => {
2026-02-13 03:32:03.605021 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-13 03:32:03.605031 | orchestrator | }
2026-02-13 03:32:03.605041 | orchestrator |
2026-02-13 03:32:03.605051 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-13 03:32:03.605061 | orchestrator | Friday 13 February 2026 03:31:58 +0000 (0:00:00.171) 0:01:06.784 *******
2026-02-13 03:32:03.605070 | orchestrator | ok: [testbed-node-5] => {
2026-02-13 03:32:03.605080 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-13 03:32:03.605092 | orchestrator | }
2026-02-13 03:32:03.605103 | orchestrator |
2026-02-13 03:32:03.605115 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-13 03:32:03.605126 | orchestrator | Friday 13 February 2026 03:31:58 +0000 (0:00:00.159) 0:01:06.944 *******
2026-02-13 03:32:03.605138 | orchestrator | ok: [testbed-node-5] => {
2026-02-13 03:32:03.605150 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-13 03:32:03.605161 | orchestrator | }
2026-02-13 03:32:03.605172 | orchestrator |
2026-02-13 03:32:03.605184 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-13 03:32:03.605194 | orchestrator | Friday 13 February 2026 03:31:58 +0000 (0:00:00.151) 0:01:07.095 *******
2026-02-13 03:32:03.605207 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:32:03.605218 | orchestrator |
2026-02-13 03:32:03.605229 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-13 03:32:03.605241 | orchestrator | Friday 13 February 2026 03:31:59 +0000 (0:00:00.550) 0:01:07.645 *******
2026-02-13 03:32:03.605274 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:32:03.605286 | orchestrator |
2026-02-13 03:32:03.605298 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-13 03:32:03.605309 | orchestrator | Friday 13 February 2026 03:31:59 +0000 (0:00:00.529) 0:01:08.175 *******
2026-02-13 03:32:03.605320 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:32:03.605332 | orchestrator |
2026-02-13 03:32:03.605344 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-13 03:32:03.605356 | orchestrator | Friday 13 February 2026 03:32:00 +0000 (0:00:00.157) 0:01:08.684 *******
2026-02-13 03:32:03.605367 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:32:03.605378 | orchestrator |
2026-02-13 03:32:03.605390 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-13 03:32:03.605409 | orchestrator | Friday 13 February 2026 03:32:00 +0000 (0:00:00.157) 0:01:08.841 *******
2026-02-13 03:32:03.605421 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.605433 | orchestrator |
2026-02-13 03:32:03.605445 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-13 03:32:03.605455 | orchestrator | Friday 13 February 2026 03:32:00 +0000 (0:00:00.122) 0:01:08.963 *******
2026-02-13 03:32:03.605465 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.605474 | orchestrator |
2026-02-13 03:32:03.605484 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-13 03:32:03.605494 | orchestrator | Friday 13 February 2026 03:32:00 +0000 (0:00:00.335) 0:01:09.299 *******
2026-02-13 03:32:03.605504 | orchestrator | ok: [testbed-node-5] => {
2026-02-13 03:32:03.605513 | orchestrator |  "vgs_report": {
2026-02-13 03:32:03.605524 | orchestrator |  "vg": []
2026-02-13 03:32:03.605551 | orchestrator |  }
2026-02-13 03:32:03.605562 | orchestrator | }
2026-02-13 03:32:03.605572 | orchestrator |
2026-02-13 03:32:03.605582 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-13 03:32:03.605591 | orchestrator | Friday 13 February 2026 03:32:00 +0000 (0:00:00.157) 0:01:09.457 *******
2026-02-13 03:32:03.605601 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.605611 | orchestrator |
2026-02-13 03:32:03.605620 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-13 03:32:03.605630 | orchestrator | Friday 13 February 2026 03:32:00 +0000 (0:00:00.148) 0:01:09.605 *******
2026-02-13 03:32:03.605645 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.605655 | orchestrator |
2026-02-13 03:32:03.605664 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-13 03:32:03.605674 | orchestrator | Friday 13 February 2026 03:32:01 +0000 (0:00:00.157) 0:01:09.762 *******
2026-02-13 03:32:03.605684 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:03.605693 | orchestrator |
2026-02-13 03:32:03.605703 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-13 03:32:03.605713 | orchestrator | Friday 13 February 2026 03:32:01 +0000 (0:00:00.147) 0:01:09.910 *******
2026-02-13 03:32:03.605722 | orchestrator |
skipping: [testbed-node-5] 2026-02-13 03:32:03.605732 | orchestrator | 2026-02-13 03:32:03.605742 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-13 03:32:03.605751 | orchestrator | Friday 13 February 2026 03:32:01 +0000 (0:00:00.141) 0:01:10.052 ******* 2026-02-13 03:32:03.605761 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.605771 | orchestrator | 2026-02-13 03:32:03.605780 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-13 03:32:03.605790 | orchestrator | Friday 13 February 2026 03:32:01 +0000 (0:00:00.142) 0:01:10.194 ******* 2026-02-13 03:32:03.605800 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.605809 | orchestrator | 2026-02-13 03:32:03.605819 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-13 03:32:03.605828 | orchestrator | Friday 13 February 2026 03:32:01 +0000 (0:00:00.131) 0:01:10.326 ******* 2026-02-13 03:32:03.605838 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.605847 | orchestrator | 2026-02-13 03:32:03.605857 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-13 03:32:03.605867 | orchestrator | Friday 13 February 2026 03:32:01 +0000 (0:00:00.145) 0:01:10.472 ******* 2026-02-13 03:32:03.605876 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.605886 | orchestrator | 2026-02-13 03:32:03.605896 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-13 03:32:03.605906 | orchestrator | Friday 13 February 2026 03:32:02 +0000 (0:00:00.143) 0:01:10.616 ******* 2026-02-13 03:32:03.605915 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.605925 | orchestrator | 2026-02-13 03:32:03.605935 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-13 
03:32:03.605945 | orchestrator | Friday 13 February 2026 03:32:02 +0000 (0:00:00.129) 0:01:10.746 ******* 2026-02-13 03:32:03.605961 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.605970 | orchestrator | 2026-02-13 03:32:03.605980 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-13 03:32:03.605990 | orchestrator | Friday 13 February 2026 03:32:02 +0000 (0:00:00.165) 0:01:10.912 ******* 2026-02-13 03:32:03.605999 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.606009 | orchestrator | 2026-02-13 03:32:03.606075 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-13 03:32:03.606087 | orchestrator | Friday 13 February 2026 03:32:02 +0000 (0:00:00.347) 0:01:11.259 ******* 2026-02-13 03:32:03.606097 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.606107 | orchestrator | 2026-02-13 03:32:03.606116 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-13 03:32:03.606126 | orchestrator | Friday 13 February 2026 03:32:02 +0000 (0:00:00.153) 0:01:11.412 ******* 2026-02-13 03:32:03.606136 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.606148 | orchestrator | 2026-02-13 03:32:03.606163 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-13 03:32:03.606179 | orchestrator | Friday 13 February 2026 03:32:02 +0000 (0:00:00.153) 0:01:11.565 ******* 2026-02-13 03:32:03.606195 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.606209 | orchestrator | 2026-02-13 03:32:03.606224 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-13 03:32:03.606239 | orchestrator | Friday 13 February 2026 03:32:03 +0000 (0:00:00.137) 0:01:11.703 ******* 2026-02-13 03:32:03.606275 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:03.606293 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:03.606310 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.606327 | orchestrator | 2026-02-13 03:32:03.606342 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-13 03:32:03.606359 | orchestrator | Friday 13 February 2026 03:32:03 +0000 (0:00:00.171) 0:01:11.874 ******* 2026-02-13 03:32:03.606370 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:03.606379 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:03.606389 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:03.606399 | orchestrator | 2026-02-13 03:32:03.606408 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-13 03:32:03.606418 | orchestrator | Friday 13 February 2026 03:32:03 +0000 (0:00:00.171) 0:01:12.046 ******* 2026-02-13 03:32:03.606437 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:06.665746 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:06.665850 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:06.665864 | orchestrator | 2026-02-13 03:32:06.665892 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-13 03:32:06.665904 | orchestrator | Friday 13 February 2026 03:32:03 +0000 (0:00:00.157) 0:01:12.204 ******* 2026-02-13 03:32:06.665929 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:06.665949 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:06.665981 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:06.665992 | orchestrator | 2026-02-13 03:32:06.666002 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-13 03:32:06.666012 | orchestrator | Friday 13 February 2026 03:32:03 +0000 (0:00:00.154) 0:01:12.359 ******* 2026-02-13 03:32:06.666077 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:06.666087 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:06.666097 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:06.666107 | orchestrator | 2026-02-13 03:32:06.666117 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-13 03:32:06.666155 | orchestrator | Friday 13 February 2026 03:32:03 +0000 (0:00:00.165) 0:01:12.525 ******* 2026-02-13 03:32:06.666166 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:06.666176 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:06.666186 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:06.666195 | orchestrator | 2026-02-13 03:32:06.666205 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-13 03:32:06.666215 | orchestrator | Friday 13 February 2026 03:32:04 +0000 (0:00:00.163) 0:01:12.688 ******* 2026-02-13 03:32:06.666224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:06.666234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:06.666243 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:06.666274 | orchestrator | 2026-02-13 03:32:06.666284 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-13 03:32:06.666295 | orchestrator | Friday 13 February 2026 03:32:04 +0000 (0:00:00.159) 0:01:12.848 ******* 2026-02-13 03:32:06.666307 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:06.666319 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:06.666330 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:06.666341 | orchestrator | 2026-02-13 03:32:06.666353 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-13 03:32:06.666364 | orchestrator | Friday 13 February 2026 03:32:04 +0000 (0:00:00.175) 0:01:13.023 ******* 2026-02-13 03:32:06.666375 | 
orchestrator | ok: [testbed-node-5] 2026-02-13 03:32:06.666387 | orchestrator | 2026-02-13 03:32:06.666398 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-13 03:32:06.666409 | orchestrator | Friday 13 February 2026 03:32:05 +0000 (0:00:00.732) 0:01:13.755 ******* 2026-02-13 03:32:06.666420 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:32:06.666431 | orchestrator | 2026-02-13 03:32:06.666442 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-13 03:32:06.666454 | orchestrator | Friday 13 February 2026 03:32:05 +0000 (0:00:00.508) 0:01:14.264 ******* 2026-02-13 03:32:06.666465 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:32:06.666477 | orchestrator | 2026-02-13 03:32:06.666488 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-13 03:32:06.666499 | orchestrator | Friday 13 February 2026 03:32:05 +0000 (0:00:00.149) 0:01:14.413 ******* 2026-02-13 03:32:06.666518 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'vg_name': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}) 2026-02-13 03:32:06.666531 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'vg_name': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}) 2026-02-13 03:32:06.666543 | orchestrator | 2026-02-13 03:32:06.666554 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-13 03:32:06.666566 | orchestrator | Friday 13 February 2026 03:32:05 +0000 (0:00:00.181) 0:01:14.594 ******* 2026-02-13 03:32:06.666594 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:06.666612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:06.666624 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:06.666636 | orchestrator | 2026-02-13 03:32:06.666648 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-13 03:32:06.666659 | orchestrator | Friday 13 February 2026 03:32:06 +0000 (0:00:00.161) 0:01:14.755 ******* 2026-02-13 03:32:06.666668 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:06.666678 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:06.666688 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:06.666697 | orchestrator | 2026-02-13 03:32:06.666707 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-13 03:32:06.666717 | orchestrator | Friday 13 February 2026 03:32:06 +0000 (0:00:00.161) 0:01:14.917 ******* 2026-02-13 03:32:06.666726 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 03:32:06.666736 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 03:32:06.666746 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:06.666755 | orchestrator | 2026-02-13 03:32:06.666765 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-13 03:32:06.666775 | orchestrator | Friday 13 February 2026 03:32:06 +0000 (0:00:00.166) 0:01:15.084 ******* 2026-02-13 03:32:06.666784 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-13 03:32:06.666794 | orchestrator |  "lvm_report": { 2026-02-13 03:32:06.666804 | orchestrator |  "lv": [ 2026-02-13 03:32:06.666814 | orchestrator |  { 2026-02-13 03:32:06.666824 | orchestrator |  "lv_name": "osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1", 2026-02-13 03:32:06.666834 | orchestrator |  "vg_name": "ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1" 2026-02-13 03:32:06.666844 | orchestrator |  }, 2026-02-13 03:32:06.666854 | orchestrator |  { 2026-02-13 03:32:06.666864 | orchestrator |  "lv_name": "osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6", 2026-02-13 03:32:06.666874 | orchestrator |  "vg_name": "ceph-8151fb69-3858-5887-af01-e0d44d84b3e6" 2026-02-13 03:32:06.666883 | orchestrator |  } 2026-02-13 03:32:06.666893 | orchestrator |  ], 2026-02-13 03:32:06.666903 | orchestrator |  "pv": [ 2026-02-13 03:32:06.666912 | orchestrator |  { 2026-02-13 03:32:06.666921 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-13 03:32:06.666931 | orchestrator |  "vg_name": "ceph-8151fb69-3858-5887-af01-e0d44d84b3e6" 2026-02-13 03:32:06.666941 | orchestrator |  }, 2026-02-13 03:32:06.666950 | orchestrator |  { 2026-02-13 03:32:06.666960 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-13 03:32:06.666979 | orchestrator |  "vg_name": "ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1" 2026-02-13 03:32:06.666989 | orchestrator |  } 2026-02-13 03:32:06.666999 | orchestrator |  ] 2026-02-13 03:32:06.667008 | orchestrator |  } 2026-02-13 03:32:06.667018 | orchestrator | } 2026-02-13 03:32:06.667028 | orchestrator | 2026-02-13 03:32:06.667038 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 03:32:06.667048 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-13 03:32:06.667058 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-13 03:32:06.667068 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-13 03:32:06.667078 | orchestrator | 2026-02-13 03:32:06.667088 | orchestrator | 2026-02-13 03:32:06.667097 | orchestrator | 2026-02-13 03:32:06.667107 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:32:06.667116 | orchestrator | Friday 13 February 2026 03:32:06 +0000 (0:00:00.157) 0:01:15.241 ******* 2026-02-13 03:32:06.667126 | orchestrator | =============================================================================== 2026-02-13 03:32:06.667136 | orchestrator | Create block VGs -------------------------------------------------------- 5.75s 2026-02-13 03:32:06.667145 | orchestrator | Create block LVs -------------------------------------------------------- 4.21s 2026-02-13 03:32:06.667155 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.81s 2026-02-13 03:32:06.667164 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.73s 2026-02-13 03:32:06.667174 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2026-02-13 03:32:06.667184 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.59s 2026-02-13 03:32:06.667193 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s 2026-02-13 03:32:06.667203 | orchestrator | Add known links to the list of available block devices ------------------ 1.39s 2026-02-13 03:32:06.667218 | orchestrator | Add known partitions to the list of available block devices ------------- 1.28s 2026-02-13 03:32:06.983407 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.22s 2026-02-13 03:32:06.983556 | orchestrator | Add known links to the list of available block devices ------------------ 0.91s 2026-02-13 03:32:06.983581 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.84s 2026-02-13 03:32:06.983627 | orchestrator | Get initial list of available block devices ----------------------------- 0.82s 2026-02-13 03:32:06.983646 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.76s 2026-02-13 03:32:06.983663 | orchestrator | Print LVM report data --------------------------------------------------- 0.76s 2026-02-13 03:32:06.983682 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-02-13 03:32:06.983700 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.70s 2026-02-13 03:32:06.983717 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-02-13 03:32:06.983734 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.70s 2026-02-13 03:32:06.983753 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.69s 2026-02-13 03:32:19.450876 | orchestrator | 2026-02-13 03:32:19 | INFO  | Task 49f95582-4129-4354-802f-8bb7d289d2a2 (facts) was prepared for execution. 2026-02-13 03:32:19.451017 | orchestrator | 2026-02-13 03:32:19 | INFO  | It takes a moment until task 49f95582-4129-4354-802f-8bb7d289d2a2 (facts) has been started and output is visible here. 
2026-02-13 03:32:32.426923 | orchestrator | 2026-02-13 03:32:32.427023 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-13 03:32:32.427056 | orchestrator | 2026-02-13 03:32:32.427067 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-13 03:32:32.427075 | orchestrator | Friday 13 February 2026 03:32:23 +0000 (0:00:00.267) 0:00:00.267 ******* 2026-02-13 03:32:32.427083 | orchestrator | ok: [testbed-manager] 2026-02-13 03:32:32.427092 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:32:32.427100 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:32:32.427108 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:32:32.427116 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:32:32.427124 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:32:32.427132 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:32:32.427139 | orchestrator | 2026-02-13 03:32:32.427147 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-13 03:32:32.427155 | orchestrator | Friday 13 February 2026 03:32:24 +0000 (0:00:01.129) 0:00:01.397 ******* 2026-02-13 03:32:32.427163 | orchestrator | skipping: [testbed-manager] 2026-02-13 03:32:32.427172 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:32:32.427180 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:32:32.427188 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:32:32.427196 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:32:32.427204 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:32:32.427212 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:32.427220 | orchestrator | 2026-02-13 03:32:32.427228 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-13 03:32:32.427236 | orchestrator | 2026-02-13 03:32:32.427244 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-13 03:32:32.427252 | orchestrator | Friday 13 February 2026 03:32:26 +0000 (0:00:01.277) 0:00:02.675 ******* 2026-02-13 03:32:32.427260 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:32:32.427267 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:32:32.427332 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:32:32.427341 | orchestrator | ok: [testbed-manager] 2026-02-13 03:32:32.427349 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:32:32.427357 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:32:32.427364 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:32:32.427372 | orchestrator | 2026-02-13 03:32:32.427380 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-13 03:32:32.427388 | orchestrator | 2026-02-13 03:32:32.427396 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-13 03:32:32.427404 | orchestrator | Friday 13 February 2026 03:32:31 +0000 (0:00:05.347) 0:00:08.022 ******* 2026-02-13 03:32:32.427412 | orchestrator | skipping: [testbed-manager] 2026-02-13 03:32:32.427420 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:32:32.427428 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:32:32.427436 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:32:32.427444 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:32:32.427452 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:32:32.427460 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:32:32.427467 | orchestrator | 2026-02-13 03:32:32.427476 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 03:32:32.427484 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 03:32:32.427494 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-13 03:32:32.427504 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 03:32:32.427513 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 03:32:32.427523 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 03:32:32.427539 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 03:32:32.427548 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 03:32:32.427557 | orchestrator | 2026-02-13 03:32:32.427567 | orchestrator | 2026-02-13 03:32:32.427576 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:32:32.427597 | orchestrator | Friday 13 February 2026 03:32:32 +0000 (0:00:00.551) 0:00:08.574 ******* 2026-02-13 03:32:32.427607 | orchestrator | =============================================================================== 2026-02-13 03:32:32.427615 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.35s 2026-02-13 03:32:32.427625 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2026-02-13 03:32:32.427634 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2026-02-13 03:32:32.427644 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-02-13 03:32:34.752792 | orchestrator | 2026-02-13 03:32:34 | INFO  | Task 188c9f38-8412-47ee-94a5-33aa042959f1 (ceph) was prepared for execution. 2026-02-13 03:32:34.752903 | orchestrator | 2026-02-13 03:32:34 | INFO  | It takes a moment until task 188c9f38-8412-47ee-94a5-33aa042959f1 (ceph) has been started and output is visible here. 
2026-02-13 03:32:52.499905 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-13 03:32:52.500000 | orchestrator | 2.16.14 2026-02-13 03:32:52.500013 | orchestrator | 2026-02-13 03:32:52.500020 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-13 03:32:52.500028 | orchestrator | 2026-02-13 03:32:52.500035 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-13 03:32:52.500043 | orchestrator | Friday 13 February 2026 03:32:39 +0000 (0:00:00.833) 0:00:00.833 ******* 2026-02-13 03:32:52.500051 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:32:52.500059 | orchestrator | 2026-02-13 03:32:52.500066 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-13 03:32:52.500073 | orchestrator | Friday 13 February 2026 03:32:40 +0000 (0:00:01.150) 0:00:01.983 ******* 2026-02-13 03:32:52.500080 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:32:52.500087 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:32:52.500094 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:32:52.500101 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:32:52.500108 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:32:52.500115 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:32:52.500122 | orchestrator | 2026-02-13 03:32:52.500129 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-13 03:32:52.500136 | orchestrator | Friday 13 February 2026 03:32:42 +0000 (0:00:01.273) 0:00:03.257 ******* 2026-02-13 03:32:52.500144 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:32:52.500151 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:32:52.500158 | orchestrator | ok: [testbed-node-5] 2026-02-13 
03:32:52.500164 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:32:52.500171 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:32:52.500178 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:32:52.500185 | orchestrator |
2026-02-13 03:32:52.500192 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-13 03:32:52.500198 | orchestrator | Friday 13 February 2026 03:32:42 +0000 (0:00:00.745) 0:00:04.003 *******
2026-02-13 03:32:52.500204 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:32:52.500210 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:32:52.500215 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:32:52.500222 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:32:52.500249 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:32:52.500257 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:32:52.500263 | orchestrator |
2026-02-13 03:32:52.500270 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-13 03:32:52.500277 | orchestrator | Friday 13 February 2026 03:32:43 +0000 (0:00:00.971) 0:00:04.974 *******
2026-02-13 03:32:52.500284 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:32:52.500327 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:32:52.500335 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:32:52.500341 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:32:52.500348 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:32:52.500355 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:32:52.500362 | orchestrator |
2026-02-13 03:32:52.500369 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-13 03:32:52.500375 | orchestrator | Friday 13 February 2026 03:32:44 +0000 (0:00:00.755) 0:00:05.730 *******
2026-02-13 03:32:52.500382 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:32:52.500388 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:32:52.500395 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:32:52.500402 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:32:52.500409 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:32:52.500415 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:32:52.500422 | orchestrator |
2026-02-13 03:32:52.500428 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-13 03:32:52.500435 | orchestrator | Friday 13 February 2026 03:32:45 +0000 (0:00:00.583) 0:00:06.314 *******
2026-02-13 03:32:52.500442 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:32:52.500449 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:32:52.500456 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:32:52.500463 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:32:52.500470 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:32:52.500479 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:32:52.500485 | orchestrator |
2026-02-13 03:32:52.500492 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-13 03:32:52.500498 | orchestrator | Friday 13 February 2026 03:32:46 +0000 (0:00:00.782) 0:00:07.096 *******
2026-02-13 03:32:52.500505 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:32:52.500512 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:32:52.500518 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:32:52.500526 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:32:52.500532 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:32:52.500538 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:32:52.500544 | orchestrator |
2026-02-13 03:32:52.500550 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-13 03:32:52.500556 | orchestrator | Friday 13 February 2026 03:32:46 +0000 (0:00:00.584) 0:00:07.681 *******
2026-02-13 03:32:52.500563 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:32:52.500568 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:32:52.500575 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:32:52.500581 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:32:52.500587 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:32:52.500607 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:32:52.500613 | orchestrator |
2026-02-13 03:32:52.500619 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-13 03:32:52.500626 | orchestrator | Friday 13 February 2026 03:32:47 +0000 (0:00:00.756) 0:00:08.438 *******
2026-02-13 03:32:52.500634 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-13 03:32:52.500640 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 03:32:52.500646 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 03:32:52.500653 | orchestrator |
2026-02-13 03:32:52.500659 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-13 03:32:52.500665 | orchestrator | Friday 13 February 2026 03:32:48 +0000 (0:00:00.642) 0:00:09.080 *******
2026-02-13 03:32:52.500680 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:32:52.500686 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:32:52.500692 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:32:52.500714 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:32:52.500722 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:32:52.500730 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:32:52.500737 | orchestrator |
2026-02-13 03:32:52.500743 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-13 03:32:52.500749 | orchestrator | Friday 13 February 2026 03:32:48 +0000 (0:00:00.763) 0:00:09.843 *******
2026-02-13 03:32:52.500756 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-13 03:32:52.500762 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 03:32:52.500770 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 03:32:52.500777 | orchestrator |
2026-02-13 03:32:52.500783 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-13 03:32:52.500790 | orchestrator | Friday 13 February 2026 03:32:51 +0000 (0:00:02.285) 0:00:12.129 *******
2026-02-13 03:32:52.500797 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-13 03:32:52.500804 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-13 03:32:52.500810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-13 03:32:52.500819 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:32:52.500825 | orchestrator |
2026-02-13 03:32:52.500832 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-13 03:32:52.500838 | orchestrator | Friday 13 February 2026 03:32:51 +0000 (0:00:00.446) 0:00:12.575 *******
2026-02-13 03:32:52.500847 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-13 03:32:52.500856 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-13 03:32:52.500863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-13 03:32:52.500869 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:32:52.500876 | orchestrator |
2026-02-13 03:32:52.500882 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-13 03:32:52.500888 | orchestrator | Friday 13 February 2026 03:32:52 +0000 (0:00:00.600) 0:00:13.176 *******
2026-02-13 03:32:52.500896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-13 03:32:52.500904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-13 03:32:52.500911 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-13 03:32:52.500923 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:32:52.500929 | orchestrator |
2026-02-13 03:32:52.500941 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container]
*************************** 2026-02-13 03:32:52.500948 | orchestrator | Friday 13 February 2026 03:32:52 +0000 (0:00:00.154) 0:00:13.331 ******* 2026-02-13 03:32:52.500962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-13 03:32:49.656093', 'end': '2026-02-13 03:32:49.699768', 'delta': '0:00:00.043675', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-13 03:33:02.065182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-13 03:32:50.190604', 'end': '2026-02-13 03:32:50.236036', 'delta': '0:00:00.045432', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-13 03:33:02.065463 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-13 03:32:50.723344', 'end': '2026-02-13 03:32:50.771745', 'delta': 
'0:00:00.048401', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-13 03:33:02.065502 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:02.065527 | orchestrator | 2026-02-13 03:33:02.065541 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-13 03:33:02.065554 | orchestrator | Friday 13 February 2026 03:32:52 +0000 (0:00:00.177) 0:00:13.508 ******* 2026-02-13 03:33:02.065565 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:02.065577 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:02.065587 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:02.065598 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:33:02.065609 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:33:02.065620 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:33:02.065630 | orchestrator | 2026-02-13 03:33:02.065642 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-13 03:33:02.065653 | orchestrator | Friday 13 February 2026 03:32:53 +0000 (0:00:00.756) 0:00:14.265 ******* 2026-02-13 03:33:02.065664 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-13 03:33:02.065675 | orchestrator | 2026-02-13 03:33:02.065687 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-13 03:33:02.065700 | orchestrator | Friday 13 February 2026 03:32:54 +0000 (0:00:00.851) 0:00:15.117 ******* 2026-02-13 03:33:02.065739 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:02.065753 | 
orchestrator | skipping: [testbed-node-4]
2026-02-13 03:33:02.065767 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:33:02.065781 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:33:02.065801 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:33:02.065820 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:33:02.065837 | orchestrator |
2026-02-13 03:33:02.065857 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-13 03:33:02.065877 | orchestrator | Friday 13 February 2026 03:32:54 +0000 (0:00:00.810) 0:00:15.928 *******
2026-02-13 03:33:02.065897 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.065916 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:33:02.065935 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:33:02.065955 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:33:02.065973 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:33:02.065993 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:33:02.066013 | orchestrator |
2026-02-13 03:33:02.066123 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-13 03:33:02.066141 | orchestrator | Friday 13 February 2026 03:32:56 +0000 (0:00:01.110) 0:00:17.039 *******
2026-02-13 03:33:02.066160 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.066179 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:33:02.066196 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:33:02.066215 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:33:02.066233 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:33:02.066268 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:33:02.066289 | orchestrator |
2026-02-13 03:33:02.066333 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-13 03:33:02.066355 | orchestrator | Friday 13 February 2026 03:32:56 +0000 (0:00:00.576) 0:00:17.616 *******
2026-02-13 03:33:02.066375 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.066394 | orchestrator |
2026-02-13 03:33:02.066413 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-13 03:33:02.066425 | orchestrator | Friday 13 February 2026 03:32:56 +0000 (0:00:00.113) 0:00:17.729 *******
2026-02-13 03:33:02.066436 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.066446 | orchestrator |
2026-02-13 03:33:02.066457 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-13 03:33:02.066468 | orchestrator | Friday 13 February 2026 03:32:56 +0000 (0:00:00.211) 0:00:17.941 *******
2026-02-13 03:33:02.066479 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.066489 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:33:02.066500 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:33:02.066510 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:33:02.066519 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:33:02.066529 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:33:02.066539 | orchestrator |
2026-02-13 03:33:02.066571 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-13 03:33:02.066582 | orchestrator | Friday 13 February 2026 03:32:57 +0000 (0:00:00.749) 0:00:18.691 *******
2026-02-13 03:33:02.066591 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.066601 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:33:02.066610 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:33:02.066619 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:33:02.066629 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:33:02.066638 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:33:02.066648 | orchestrator |
2026-02-13 03:33:02.066658 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-13 03:33:02.066667 | orchestrator | Friday 13 February 2026 03:32:58 +0000 (0:00:00.609) 0:00:19.300 *******
2026-02-13 03:33:02.066677 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.066686 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:33:02.066696 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:33:02.066716 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:33:02.066731 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:33:02.066747 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:33:02.066763 | orchestrator |
2026-02-13 03:33:02.066779 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-13 03:33:02.066796 | orchestrator | Friday 13 February 2026 03:32:59 +0000 (0:00:00.806) 0:00:20.106 *******
2026-02-13 03:33:02.066810 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.066823 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:33:02.066839 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:33:02.066856 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:33:02.066872 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:33:02.066889 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:33:02.066899 | orchestrator |
2026-02-13 03:33:02.066909 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-13 03:33:02.066918 | orchestrator | Friday 13 February 2026 03:32:59 +0000 (0:00:00.640) 0:00:20.747 *******
2026-02-13 03:33:02.066928 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.066938 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:33:02.066947 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:33:02.066956 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:33:02.066966 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:33:02.066980 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:33:02.066996 | orchestrator |
2026-02-13 03:33:02.067012 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-13 03:33:02.067028 | orchestrator | Friday 13 February 2026 03:33:00 +0000 (0:00:00.796) 0:00:21.544 *******
2026-02-13 03:33:02.067043 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.067059 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:33:02.067076 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:33:02.067093 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:33:02.067110 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:33:02.067126 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:33:02.067143 | orchestrator |
2026-02-13 03:33:02.067159 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-13 03:33:02.067175 | orchestrator | Friday 13 February 2026 03:33:01 +0000 (0:00:00.586) 0:00:22.130 *******
2026-02-13 03:33:02.067191 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:33:02.067206 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:33:02.067222 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:33:02.067238 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:33:02.067254 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:33:02.067270 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:33:02.067287 | orchestrator |
2026-02-13 03:33:02.067328 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-13 03:33:02.067346 | orchestrator | Friday 13 February 2026 03:33:01 +0000 (0:00:00.814) 0:00:22.945 *******
2026-02-13 03:33:02.067364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f', 'dm-uuid-LVM-NgeS2OAf1eQbq2fjon94hTyRASj6CjzqPJD89JdnKlkkAQnNMDwPk0jJQkfrVtCM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.067395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab', 'dm-uuid-LVM-rnSZIgArmxAmbcLvOJFLEn8mgwYRnXlE3olXViRUdTa1K1tyYaVS99W21lGqyhJE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.067438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.188015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.188119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.188133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.188145 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.188156 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.188167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.188179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.188229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.188268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-09kMNs-4MO2-JNQz-8aT0-f4so-6Z9I-fZuQQ1', 'scsi-0QEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165', 'scsi-SQEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.188282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NVJFab-TDNv-OZxQ-P7ah-aykU-eVq3-5VieAW', 'scsi-0QEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322', 'scsi-SQEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.188295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226', 'scsi-SQEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.188382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.188417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6', 'dm-uuid-LVM-smkv35UmDioSyiKczhjvHmfqXmqpX7QT8MWiF1jmxyBB14hpOPcESPktQ6Pbw4WI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.355008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f', 
'dm-uuid-LVM-RYX1Dlxf1hzjqbJFMgqiTL3FjKVcMxwPPZJAxrorT0BeTcQP51a9OdG0Vnk33f2g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.355109 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.355129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.355142 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.355152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.355162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.355212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.355224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.355235 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:02.355248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.355283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.355299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1jNUFK-ju5u-D7ij-Py62-0wVT-eVBU-hKEJvE', 'scsi-0QEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788', 'scsi-SQEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.355382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6g4jq1-0RJN-2V5m-4iLs-xOZr-EnEV-0z42fM', 'scsi-0QEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52', 'scsi-SQEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.355405 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460', 'scsi-SQEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.529912 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.529994 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6', 'dm-uuid-LVM-9LyOomemE8dFgmHX9kCkGcu77vJ6QdzmZ9A74lmOVeHsLlc22BADhqJ8uA2fx6vT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.530005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1', 'dm-uuid-LVM-RKsGyEe6XXFp06rqxLIXGVK0DxbU0GWh40QmdxhJXhUwOk2tHWKnT9i9j7e2AfAw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.530058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.530089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.530113 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.530121 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.530128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.530149 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.530156 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.530163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.530176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.530195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-39Ra41-aCTS-vi2k-2lif-ZhtI-jPX4-Yda4Fg', 'scsi-0QEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e', 'scsi-SQEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.530203 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:02.530216 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-198k1R-oXI9-ndMQ-UumA-r8dv-vGdj-iXXLN8', 'scsi-0QEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3', 'scsi-SQEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.750923 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d', 'scsi-SQEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.751026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.751071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.751086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-13 03:33:02.751114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.751126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.751137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.751149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.751182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.751203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.751234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.751275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.751299 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:02.751389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.751409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.751442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.970740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.970872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.970888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.970900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.970926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.970961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.970987 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:02.971001 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:02.971014 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:02.971034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.971053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.971080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': 
'512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.971126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.971147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.971187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.971219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:02.971242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:33:03.203476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16', 
'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:03.203578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:33:03.203595 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:03.203609 | orchestrator | 2026-02-13 03:33:03.203621 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-13 03:33:03.203633 | orchestrator | Friday 13 February 2026 03:33:02 +0000 (0:00:01.031) 0:00:23.977 ******* 2026-02-13 03:33:03.203646 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f', 'dm-uuid-LVM-NgeS2OAf1eQbq2fjon94hTyRASj6CjzqPJD89JdnKlkkAQnNMDwPk0jJQkfrVtCM'], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.203703 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab', 'dm-uuid-LVM-rnSZIgArmxAmbcLvOJFLEn8mgwYRnXlE3olXViRUdTa1K1tyYaVS99W21lGqyhJE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.203716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.203729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.203747 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.203759 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.203770 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.203788 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.203807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.581064 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6', 'dm-uuid-LVM-smkv35UmDioSyiKczhjvHmfqXmqpX7QT8MWiF1jmxyBB14hpOPcESPktQ6Pbw4WI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.581187 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.581208 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f', 'dm-uuid-LVM-RYX1Dlxf1hzjqbJFMgqiTL3FjKVcMxwPPZJAxrorT0BeTcQP51a9OdG0Vnk33f2g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-02-13 03:33:03.581257 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': 
'227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.581387 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.581414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-09kMNs-4MO2-JNQz-8aT0-f4so-6Z9I-fZuQQ1', 'scsi-0QEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165', 'scsi-SQEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.581427 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.581439 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NVJFab-TDNv-OZxQ-P7ah-aykU-eVq3-5VieAW', 'scsi-0QEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322', 'scsi-SQEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.581471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226', 'scsi-SQEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604469 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604595 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604609 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604641 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604653 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604683 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6', 'dm-uuid-LVM-9LyOomemE8dFgmHX9kCkGcu77vJ6QdzmZ9A74lmOVeHsLlc22BADhqJ8uA2fx6vT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604702 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1', 'dm-uuid-LVM-RKsGyEe6XXFp06rqxLIXGVK0DxbU0GWh40QmdxhJXhUwOk2tHWKnT9i9j7e2AfAw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604715 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604726 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.604756 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': 
'10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.731992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732096 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1jNUFK-ju5u-D7ij-Py62-0wVT-eVBU-hKEJvE', 'scsi-0QEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788', 'scsi-SQEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732136 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732149 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6g4jq1-0RJN-2V5m-4iLs-xOZr-EnEV-0z42fM', 'scsi-0QEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52', 'scsi-SQEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732161 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732196 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460', 'scsi-SQEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732211 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732231 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:03.732246 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732258 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732270 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732281 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.732377 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854546 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-39Ra41-aCTS-vi2k-2lif-ZhtI-jPX4-Yda4Fg', 'scsi-0QEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e', 'scsi-SQEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854647 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-198k1R-oXI9-ndMQ-UumA-r8dv-vGdj-iXXLN8', 'scsi-0QEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3', 'scsi-SQEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854662 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d', 'scsi-SQEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854676 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854733 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854755 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854775 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854795 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:03.854878 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854920 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854941 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.854976 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.855012 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.960428 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-13 03:33:03.960534 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.960572 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.960604 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.960617 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.960628 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.960640 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.960656 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.960676 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:03.960687 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-13 03:33:03.960710 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 
512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:04.180797 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:04.180921 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:04.181015 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:04.181028 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:04.181041 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-02-13 03:33:04.181055 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:04.181068 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:04.181079 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 
03:33:04.181091 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:04.181142 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:04.181155 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:04.181166 | orchestrator | skipping: 
[testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:04.181180 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:04.181213 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:33:16.180383 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:16.180534 | orchestrator | 2026-02-13 03:33:16.180568 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] 
****************************** 2026-02-13 03:33:16.180591 | orchestrator | Friday 13 February 2026 03:33:04 +0000 (0:00:01.210) 0:00:25.187 ******* 2026-02-13 03:33:16.180611 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:16.180631 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:16.180650 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:16.180669 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:33:16.180689 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:33:16.180708 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:33:16.180727 | orchestrator | 2026-02-13 03:33:16.180747 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-13 03:33:16.180767 | orchestrator | Friday 13 February 2026 03:33:05 +0000 (0:00:00.968) 0:00:26.156 ******* 2026-02-13 03:33:16.180786 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:16.180806 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:16.180825 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:16.180844 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:33:16.180864 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:33:16.180884 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:33:16.180905 | orchestrator | 2026-02-13 03:33:16.180925 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-13 03:33:16.180945 | orchestrator | Friday 13 February 2026 03:33:05 +0000 (0:00:00.829) 0:00:26.986 ******* 2026-02-13 03:33:16.180965 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:16.180985 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:16.181006 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:16.181025 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:16.181046 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:16.181065 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:16.181085 | orchestrator | 2026-02-13 03:33:16.181105 | 
orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-13 03:33:16.181127 | orchestrator | Friday 13 February 2026 03:33:06 +0000 (0:00:00.609) 0:00:27.596 ******* 2026-02-13 03:33:16.181147 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:16.181167 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:16.181187 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:16.181207 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:16.181227 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:16.181246 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:16.181267 | orchestrator | 2026-02-13 03:33:16.181285 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-13 03:33:16.181305 | orchestrator | Friday 13 February 2026 03:33:07 +0000 (0:00:00.825) 0:00:28.421 ******* 2026-02-13 03:33:16.181380 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:16.181400 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:16.181419 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:16.181560 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:16.181584 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:16.181604 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:16.181625 | orchestrator | 2026-02-13 03:33:16.181645 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-13 03:33:16.181665 | orchestrator | Friday 13 February 2026 03:33:08 +0000 (0:00:00.675) 0:00:29.096 ******* 2026-02-13 03:33:16.181684 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:16.181703 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:16.181723 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:16.181742 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:16.181763 | orchestrator | skipping: [testbed-node-1] 
2026-02-13 03:33:16.181782 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:16.181799 | orchestrator | 2026-02-13 03:33:16.181818 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-13 03:33:16.181835 | orchestrator | Friday 13 February 2026 03:33:08 +0000 (0:00:00.851) 0:00:29.948 ******* 2026-02-13 03:33:16.181853 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-13 03:33:16.181872 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-13 03:33:16.181889 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-13 03:33:16.181907 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-13 03:33:16.181925 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-13 03:33:16.181942 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-13 03:33:16.181960 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 03:33:16.181978 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-13 03:33:16.181997 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-13 03:33:16.182015 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-13 03:33:16.182135 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-13 03:33:16.182154 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-13 03:33:16.182171 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-13 03:33:16.182190 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-13 03:33:16.182208 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-13 03:33:16.182227 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-13 03:33:16.182245 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-13 03:33:16.182281 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-13 
03:33:16.182301 | orchestrator | 2026-02-13 03:33:16.182342 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-13 03:33:16.182362 | orchestrator | Friday 13 February 2026 03:33:10 +0000 (0:00:01.708) 0:00:31.657 ******* 2026-02-13 03:33:16.182380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-13 03:33:16.182398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-13 03:33:16.182414 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-13 03:33:16.182432 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:16.182450 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-13 03:33:16.182468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-13 03:33:16.182486 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-13 03:33:16.182535 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:16.182554 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-13 03:33:16.182570 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-13 03:33:16.182590 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-13 03:33:16.182607 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:16.182625 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-13 03:33:16.182640 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-13 03:33:16.182677 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-13 03:33:16.182694 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:16.182710 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-13 03:33:16.182726 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-13 03:33:16.182744 | orchestrator | skipping: [testbed-node-1] => 
(item=testbed-node-2)  2026-02-13 03:33:16.182761 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:16.182779 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-13 03:33:16.182797 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-13 03:33:16.182814 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-13 03:33:16.182833 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:16.182849 | orchestrator | 2026-02-13 03:33:16.182865 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-13 03:33:16.182881 | orchestrator | Friday 13 February 2026 03:33:11 +0000 (0:00:01.081) 0:00:32.738 ******* 2026-02-13 03:33:16.182898 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:16.182914 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:16.182930 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:16.182947 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:33:16.182963 | orchestrator | 2026-02-13 03:33:16.182982 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-13 03:33:16.182999 | orchestrator | Friday 13 February 2026 03:33:12 +0000 (0:00:01.044) 0:00:33.782 ******* 2026-02-13 03:33:16.183015 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:16.183031 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:16.183047 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:16.183063 | orchestrator | 2026-02-13 03:33:16.183079 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-13 03:33:16.183095 | orchestrator | Friday 13 February 2026 03:33:13 +0000 (0:00:00.337) 0:00:34.120 ******* 2026-02-13 03:33:16.183112 | orchestrator 
| skipping: [testbed-node-3] 2026-02-13 03:33:16.183129 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:16.183145 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:16.183160 | orchestrator | 2026-02-13 03:33:16.183177 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-13 03:33:16.183193 | orchestrator | Friday 13 February 2026 03:33:13 +0000 (0:00:00.364) 0:00:34.485 ******* 2026-02-13 03:33:16.183209 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:16.183225 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:16.183241 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:16.183257 | orchestrator | 2026-02-13 03:33:16.183274 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-13 03:33:16.183290 | orchestrator | Friday 13 February 2026 03:33:14 +0000 (0:00:00.593) 0:00:35.078 ******* 2026-02-13 03:33:16.183306 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:16.183365 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:16.183382 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:16.183397 | orchestrator | 2026-02-13 03:33:16.183414 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-13 03:33:16.183431 | orchestrator | Friday 13 February 2026 03:33:14 +0000 (0:00:00.457) 0:00:35.536 ******* 2026-02-13 03:33:16.183447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 03:33:16.183464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 03:33:16.183479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 03:33:16.183495 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:16.183511 | orchestrator | 2026-02-13 03:33:16.183527 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-13 03:33:16.183561 | 
orchestrator | Friday 13 February 2026 03:33:14 +0000 (0:00:00.435) 0:00:35.971 ******* 2026-02-13 03:33:16.183577 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 03:33:16.183593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 03:33:16.183609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 03:33:16.183626 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:16.183642 | orchestrator | 2026-02-13 03:33:16.183658 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-13 03:33:16.183675 | orchestrator | Friday 13 February 2026 03:33:15 +0000 (0:00:00.409) 0:00:36.381 ******* 2026-02-13 03:33:16.183702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 03:33:16.183718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 03:33:16.183735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 03:33:16.183751 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:16.183767 | orchestrator | 2026-02-13 03:33:16.183783 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-13 03:33:16.183797 | orchestrator | Friday 13 February 2026 03:33:15 +0000 (0:00:00.439) 0:00:36.820 ******* 2026-02-13 03:33:16.183813 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:16.183828 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:16.183844 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:16.183859 | orchestrator | 2026-02-13 03:33:16.183875 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-13 03:33:16.183910 | orchestrator | Friday 13 February 2026 03:33:16 +0000 (0:00:00.364) 0:00:37.185 ******* 2026-02-13 03:33:35.520954 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-13 03:33:35.521099 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-02-13 03:33:35.521125 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-13 03:33:35.521138 | orchestrator | 2026-02-13 03:33:35.521151 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-13 03:33:35.521163 | orchestrator | Friday 13 February 2026 03:33:17 +0000 (0:00:01.057) 0:00:38.242 ******* 2026-02-13 03:33:35.521174 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-13 03:33:35.521186 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 03:33:35.521197 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 03:33:35.521208 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-13 03:33:35.521219 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-13 03:33:35.521230 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-13 03:33:35.521241 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-13 03:33:35.521252 | orchestrator | 2026-02-13 03:33:35.521263 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-13 03:33:35.521274 | orchestrator | Friday 13 February 2026 03:33:18 +0000 (0:00:00.844) 0:00:39.087 ******* 2026-02-13 03:33:35.521285 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-13 03:33:35.521296 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 03:33:35.521306 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 03:33:35.521317 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-13 03:33:35.521353 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-13 03:33:35.521364 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-13 03:33:35.521375 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-13 03:33:35.521385 | orchestrator | 2026-02-13 03:33:35.521396 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-13 03:33:35.521434 | orchestrator | Friday 13 February 2026 03:33:19 +0000 (0:00:01.920) 0:00:41.008 ******* 2026-02-13 03:33:35.521446 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:33:35.521459 | orchestrator | 2026-02-13 03:33:35.521470 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-13 03:33:35.521480 | orchestrator | Friday 13 February 2026 03:33:21 +0000 (0:00:01.198) 0:00:42.206 ******* 2026-02-13 03:33:35.521491 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:33:35.521502 | orchestrator | 2026-02-13 03:33:35.521513 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-13 03:33:35.521524 | orchestrator | Friday 13 February 2026 03:33:22 +0000 (0:00:01.280) 0:00:43.487 ******* 2026-02-13 03:33:35.521535 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:35.521546 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:35.521556 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:35.521567 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:33:35.521578 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:33:35.521589 | 
orchestrator | ok: [testbed-node-2] 2026-02-13 03:33:35.521599 | orchestrator | 2026-02-13 03:33:35.521610 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-13 03:33:35.521621 | orchestrator | Friday 13 February 2026 03:33:23 +0000 (0:00:01.271) 0:00:44.759 ******* 2026-02-13 03:33:35.521632 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:35.521642 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:35.521653 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:35.521664 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:35.521674 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:35.521685 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:35.521695 | orchestrator | 2026-02-13 03:33:35.521706 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-13 03:33:35.521717 | orchestrator | Friday 13 February 2026 03:33:24 +0000 (0:00:00.687) 0:00:45.447 ******* 2026-02-13 03:33:35.521728 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:35.521739 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:35.521749 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:35.521760 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:35.521770 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:35.521781 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:35.521792 | orchestrator | 2026-02-13 03:33:35.521817 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-13 03:33:35.521829 | orchestrator | Friday 13 February 2026 03:33:25 +0000 (0:00:00.883) 0:00:46.330 ******* 2026-02-13 03:33:35.521839 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:35.521850 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:35.521861 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:35.521871 | orchestrator | skipping: [testbed-node-2] 2026-02-13 
03:33:35.521882 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:35.521892 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:35.521903 | orchestrator | 2026-02-13 03:33:35.521914 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-13 03:33:35.521924 | orchestrator | Friday 13 February 2026 03:33:26 +0000 (0:00:00.735) 0:00:47.066 ******* 2026-02-13 03:33:35.521935 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:35.521946 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:35.521976 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:35.521988 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:33:35.521999 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:33:35.522009 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:33:35.522088 | orchestrator | 2026-02-13 03:33:35.522100 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-13 03:33:35.522120 | orchestrator | Friday 13 February 2026 03:33:27 +0000 (0:00:01.263) 0:00:48.330 ******* 2026-02-13 03:33:35.522131 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:35.522142 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:35.522153 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:35.522163 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:35.522174 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:35.522185 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:35.522195 | orchestrator | 2026-02-13 03:33:35.522206 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-13 03:33:35.522217 | orchestrator | Friday 13 February 2026 03:33:27 +0000 (0:00:00.610) 0:00:48.940 ******* 2026-02-13 03:33:35.522461 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:35.522490 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:35.522502 | orchestrator | 
skipping: [testbed-node-5] 2026-02-13 03:33:35.522512 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:35.522523 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:35.522534 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:35.522544 | orchestrator | 2026-02-13 03:33:35.522555 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-13 03:33:35.522566 | orchestrator | Friday 13 February 2026 03:33:28 +0000 (0:00:00.822) 0:00:49.763 ******* 2026-02-13 03:33:35.522577 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:35.522588 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:35.522598 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:35.522609 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:33:35.522619 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:33:35.522630 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:33:35.522641 | orchestrator | 2026-02-13 03:33:35.522651 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-13 03:33:35.522662 | orchestrator | Friday 13 February 2026 03:33:29 +0000 (0:00:01.043) 0:00:50.806 ******* 2026-02-13 03:33:35.522673 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:35.522683 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:35.522694 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:35.522704 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:33:35.522715 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:33:35.522725 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:33:35.522736 | orchestrator | 2026-02-13 03:33:35.522747 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-13 03:33:35.522758 | orchestrator | Friday 13 February 2026 03:33:31 +0000 (0:00:01.233) 0:00:52.040 ******* 2026-02-13 03:33:35.522768 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:35.522779 | orchestrator | 
skipping: [testbed-node-4] 2026-02-13 03:33:35.522789 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:35.522800 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:35.522811 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:35.522822 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:35.522833 | orchestrator | 2026-02-13 03:33:35.522843 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-13 03:33:35.522854 | orchestrator | Friday 13 February 2026 03:33:31 +0000 (0:00:00.576) 0:00:52.616 ******* 2026-02-13 03:33:35.522865 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:35.522875 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:35.522886 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:35.522896 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:33:35.522907 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:33:35.522917 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:33:35.522928 | orchestrator | 2026-02-13 03:33:35.522976 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-13 03:33:35.522987 | orchestrator | Friday 13 February 2026 03:33:32 +0000 (0:00:00.800) 0:00:53.417 ******* 2026-02-13 03:33:35.522998 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:35.523009 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:35.523041 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:35.523052 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:35.523063 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:35.523073 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:35.523084 | orchestrator | 2026-02-13 03:33:35.523095 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-13 03:33:35.523106 | orchestrator | Friday 13 February 2026 03:33:32 +0000 (0:00:00.579) 0:00:53.996 ******* 2026-02-13 
03:33:35.523117 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:35.523128 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:35.523138 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:35.523149 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:35.523160 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:35.523171 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:35.523181 | orchestrator | 2026-02-13 03:33:35.523192 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-13 03:33:35.523203 | orchestrator | Friday 13 February 2026 03:33:33 +0000 (0:00:00.813) 0:00:54.809 ******* 2026-02-13 03:33:35.523214 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:33:35.523225 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:33:35.523235 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:33:35.523246 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:35.523257 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:35.523276 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:35.523287 | orchestrator | 2026-02-13 03:33:35.523298 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-13 03:33:35.523309 | orchestrator | Friday 13 February 2026 03:33:34 +0000 (0:00:00.608) 0:00:55.417 ******* 2026-02-13 03:33:35.523319 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:35.523384 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:33:35.523397 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:33:35.523408 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:33:35.523418 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:33:35.523429 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:33:35.523440 | orchestrator | 2026-02-13 03:33:35.523451 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-13 03:33:35.523462 
| orchestrator | Friday 13 February 2026 03:33:35 +0000 (0:00:00.827) 0:00:56.244 ******* 2026-02-13 03:33:35.523473 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:33:35.523496 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:34:49.842293 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:34:49.842459 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:34:49.842480 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:34:49.842492 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:34:49.842504 | orchestrator | 2026-02-13 03:34:49.842517 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-13 03:34:49.842529 | orchestrator | Friday 13 February 2026 03:33:35 +0000 (0:00:00.620) 0:00:56.865 ******* 2026-02-13 03:34:49.842541 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:34:49.842552 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:34:49.842562 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:34:49.842573 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:34:49.842585 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:34:49.842596 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:34:49.842607 | orchestrator | 2026-02-13 03:34:49.842618 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-13 03:34:49.842629 | orchestrator | Friday 13 February 2026 03:33:36 +0000 (0:00:00.859) 0:00:57.725 ******* 2026-02-13 03:34:49.842640 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:34:49.842651 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:34:49.842662 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:34:49.842672 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:34:49.842683 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:34:49.842693 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:34:49.842727 | orchestrator | 2026-02-13 03:34:49.842738 | orchestrator | TASK [ceph-handler : 
Set_fact handler_exporter_status] ************************* 2026-02-13 03:34:49.842749 | orchestrator | Friday 13 February 2026 03:33:37 +0000 (0:00:00.615) 0:00:58.340 ******* 2026-02-13 03:34:49.842760 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:34:49.842771 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:34:49.842781 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:34:49.842792 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:34:49.842802 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:34:49.842813 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:34:49.842824 | orchestrator | 2026-02-13 03:34:49.842837 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-13 03:34:49.842849 | orchestrator | Friday 13 February 2026 03:33:38 +0000 (0:00:01.252) 0:00:59.592 ******* 2026-02-13 03:34:49.842861 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:34:49.842874 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:34:49.842886 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:34:49.842898 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:34:49.842910 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:34:49.842922 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:34:49.842934 | orchestrator | 2026-02-13 03:34:49.842947 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-13 03:34:49.842959 | orchestrator | Friday 13 February 2026 03:33:40 +0000 (0:00:01.766) 0:01:01.359 ******* 2026-02-13 03:34:49.842972 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:34:49.842985 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:34:49.842997 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:34:49.843010 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:34:49.843022 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:34:49.843034 | orchestrator | changed: [testbed-node-2] 2026-02-13 
03:34:49.843046 | orchestrator | 2026-02-13 03:34:49.843059 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-13 03:34:49.843070 | orchestrator | Friday 13 February 2026 03:33:42 +0000 (0:00:02.377) 0:01:03.737 ******* 2026-02-13 03:34:49.843082 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:34:49.843094 | orchestrator | 2026-02-13 03:34:49.843105 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-13 03:34:49.843115 | orchestrator | Friday 13 February 2026 03:33:43 +0000 (0:00:01.250) 0:01:04.987 ******* 2026-02-13 03:34:49.843126 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:34:49.843137 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:34:49.843148 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:34:49.843158 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:34:49.843169 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:34:49.843179 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:34:49.843190 | orchestrator | 2026-02-13 03:34:49.843201 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-13 03:34:49.843211 | orchestrator | Friday 13 February 2026 03:33:44 +0000 (0:00:00.625) 0:01:05.613 ******* 2026-02-13 03:34:49.843222 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:34:49.843233 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:34:49.843243 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:34:49.843254 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:34:49.843264 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:34:49.843275 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:34:49.843286 | orchestrator | 2026-02-13 03:34:49.843296 | orchestrator 
| TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-13 03:34:49.843307 | orchestrator | Friday 13 February 2026 03:33:45 +0000 (0:00:00.854) 0:01:06.467 ******* 2026-02-13 03:34:49.843318 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-13 03:34:49.843343 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-13 03:34:49.843362 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-13 03:34:49.843373 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-13 03:34:49.843442 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-13 03:34:49.843461 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-13 03:34:49.843479 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-13 03:34:49.843497 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-13 03:34:49.843516 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-13 03:34:49.843559 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-13 03:34:49.843572 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-13 03:34:49.843584 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-13 03:34:49.843594 | orchestrator | 2026-02-13 03:34:49.843605 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-13 03:34:49.843616 | orchestrator | Friday 13 February 2026 03:33:46 +0000 (0:00:01.362) 0:01:07.830 ******* 2026-02-13 03:34:49.843627 | orchestrator | changed: 
[testbed-node-3] 2026-02-13 03:34:49.843638 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:34:49.843649 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:34:49.843659 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:34:49.843670 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:34:49.843680 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:34:49.843691 | orchestrator | 2026-02-13 03:34:49.843702 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-13 03:34:49.843713 | orchestrator | Friday 13 February 2026 03:33:47 +0000 (0:00:01.181) 0:01:09.012 ******* 2026-02-13 03:34:49.843723 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:34:49.843734 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:34:49.843744 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:34:49.843755 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:34:49.843765 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:34:49.843776 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:34:49.843786 | orchestrator | 2026-02-13 03:34:49.843797 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-13 03:34:49.843808 | orchestrator | Friday 13 February 2026 03:33:48 +0000 (0:00:00.614) 0:01:09.627 ******* 2026-02-13 03:34:49.843818 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:34:49.843829 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:34:49.843840 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:34:49.843850 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:34:49.843860 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:34:49.843871 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:34:49.843882 | orchestrator | 2026-02-13 03:34:49.843893 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-13 03:34:49.843903 | 
orchestrator | Friday 13 February 2026 03:33:49 +0000 (0:00:00.851) 0:01:10.478 ******* 2026-02-13 03:34:49.843914 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:34:49.843925 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:34:49.843936 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:34:49.843946 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:34:49.843957 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:34:49.843967 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:34:49.843978 | orchestrator | 2026-02-13 03:34:49.843989 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-13 03:34:49.843999 | orchestrator | Friday 13 February 2026 03:33:50 +0000 (0:00:00.651) 0:01:11.130 ******* 2026-02-13 03:34:49.844020 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:34:49.844031 | orchestrator | 2026-02-13 03:34:49.844041 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-13 03:34:49.844052 | orchestrator | Friday 13 February 2026 03:33:51 +0000 (0:00:01.290) 0:01:12.421 ******* 2026-02-13 03:34:49.844063 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:34:49.844073 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:34:49.844084 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:34:49.844095 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:34:49.844105 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:34:49.844116 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:34:49.844126 | orchestrator | 2026-02-13 03:34:49.844137 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-13 03:34:49.844148 | orchestrator | Friday 13 February 2026 03:34:49 +0000 (0:00:57.770) 0:02:10.191 ******* 2026-02-13 
03:34:49.844159 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-13 03:34:49.844170 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-13 03:34:49.844180 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-13 03:34:49.844191 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:34:49.844202 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-13 03:34:49.844212 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-13 03:34:49.844223 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-13 03:34:49.844234 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:34:49.844244 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-13 03:34:49.844255 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-13 03:34:49.844272 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-13 03:34:49.844283 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:34:49.844294 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-13 03:34:49.844304 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-13 03:34:49.844315 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-13 03:34:49.844325 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:34:49.844336 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-13 03:34:49.844347 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-13 03:34:49.844358 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/grafana/grafana:6.7.4)  2026-02-13 03:34:49.844399 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.982135 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-13 03:35:12.982249 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-13 03:35:12.982265 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-13 03:35:12.982277 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.982289 | orchestrator | 2026-02-13 03:35:12.982303 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-13 03:35:12.982314 | orchestrator | Friday 13 February 2026 03:34:49 +0000 (0:00:00.661) 0:02:10.852 ******* 2026-02-13 03:35:12.982325 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.982336 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:35:12.982348 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.982359 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.982370 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.982467 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.982492 | orchestrator | 2026-02-13 03:35:12.982504 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-13 03:35:12.982515 | orchestrator | Friday 13 February 2026 03:34:50 +0000 (0:00:00.829) 0:02:11.681 ******* 2026-02-13 03:35:12.982525 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.982536 | orchestrator | 2026-02-13 03:35:12.982548 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-13 03:35:12.982558 | orchestrator | Friday 13 February 2026 03:34:50 +0000 (0:00:00.155) 0:02:11.837 ******* 2026-02-13 03:35:12.982569 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.982580 | orchestrator | 
skipping: [testbed-node-4] 2026-02-13 03:35:12.982591 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.982602 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.982612 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.982623 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.982636 | orchestrator | 2026-02-13 03:35:12.982648 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-13 03:35:12.982661 | orchestrator | Friday 13 February 2026 03:34:51 +0000 (0:00:00.619) 0:02:12.457 ******* 2026-02-13 03:35:12.982673 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.982686 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:35:12.982698 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.982710 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.982722 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.982735 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.982747 | orchestrator | 2026-02-13 03:35:12.982759 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-13 03:35:12.982772 | orchestrator | Friday 13 February 2026 03:34:52 +0000 (0:00:00.788) 0:02:13.245 ******* 2026-02-13 03:35:12.982784 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.982796 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:35:12.982808 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.982821 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.982833 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.982845 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.982857 | orchestrator | 2026-02-13 03:35:12.982870 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-13 03:35:12.982882 | orchestrator | Friday 13 February 2026 03:34:52 +0000 
(0:00:00.627) 0:02:13.873 ******* 2026-02-13 03:35:12.982895 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:35:12.982908 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:35:12.982920 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:35:12.982932 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:35:12.982945 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:35:12.982957 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:35:12.982969 | orchestrator | 2026-02-13 03:35:12.982982 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-13 03:35:12.982994 | orchestrator | Friday 13 February 2026 03:34:56 +0000 (0:00:03.317) 0:02:17.191 ******* 2026-02-13 03:35:12.983005 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:35:12.983015 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:35:12.983026 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:35:12.983036 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:35:12.983047 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:35:12.983057 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:35:12.983068 | orchestrator | 2026-02-13 03:35:12.983079 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-13 03:35:12.983090 | orchestrator | Friday 13 February 2026 03:34:56 +0000 (0:00:00.585) 0:02:17.776 ******* 2026-02-13 03:35:12.983102 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:35:12.983114 | orchestrator | 2026-02-13 03:35:12.983125 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-13 03:35:12.983144 | orchestrator | Friday 13 February 2026 03:34:58 +0000 (0:00:01.253) 0:02:19.030 ******* 2026-02-13 03:35:12.983155 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.983166 | orchestrator | 
skipping: [testbed-node-4] 2026-02-13 03:35:12.983177 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.983188 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.983213 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.983225 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.983235 | orchestrator | 2026-02-13 03:35:12.983246 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-13 03:35:12.983257 | orchestrator | Friday 13 February 2026 03:34:58 +0000 (0:00:00.861) 0:02:19.892 ******* 2026-02-13 03:35:12.983268 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.983278 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:35:12.983289 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.983300 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.983310 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.983321 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.983332 | orchestrator | 2026-02-13 03:35:12.983343 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-13 03:35:12.983354 | orchestrator | Friday 13 February 2026 03:34:59 +0000 (0:00:00.580) 0:02:20.472 ******* 2026-02-13 03:35:12.983365 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.983409 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:35:12.983422 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.983433 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.983443 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.983454 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.983465 | orchestrator | 2026-02-13 03:35:12.983476 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-13 03:35:12.983487 | orchestrator | Friday 13 February 2026 03:35:00 +0000 
(0:00:00.856) 0:02:21.329 ******* 2026-02-13 03:35:12.983498 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.983509 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:35:12.983520 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.983530 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.983541 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.983552 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.983562 | orchestrator | 2026-02-13 03:35:12.983573 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-13 03:35:12.983584 | orchestrator | Friday 13 February 2026 03:35:00 +0000 (0:00:00.588) 0:02:21.917 ******* 2026-02-13 03:35:12.983595 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.983606 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:35:12.983616 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.983627 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.983638 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.983649 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.983659 | orchestrator | 2026-02-13 03:35:12.983670 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-13 03:35:12.983681 | orchestrator | Friday 13 February 2026 03:35:01 +0000 (0:00:00.839) 0:02:22.756 ******* 2026-02-13 03:35:12.983692 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.983702 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:35:12.983713 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.983724 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.983735 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.983745 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.983756 | orchestrator | 2026-02-13 03:35:12.983767 | orchestrator | TASK [ceph-container-common 
: Set_fact ceph_release pacific] ******************* 2026-02-13 03:35:12.983778 | orchestrator | Friday 13 February 2026 03:35:02 +0000 (0:00:00.597) 0:02:23.354 ******* 2026-02-13 03:35:12.983796 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.983807 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:35:12.983818 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.983829 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.983840 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.983851 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.983862 | orchestrator | 2026-02-13 03:35:12.983873 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-13 03:35:12.983884 | orchestrator | Friday 13 February 2026 03:35:03 +0000 (0:00:00.824) 0:02:24.178 ******* 2026-02-13 03:35:12.983895 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:35:12.983905 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:35:12.983916 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:35:12.983927 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:12.983938 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:12.983948 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:12.983959 | orchestrator | 2026-02-13 03:35:12.983970 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-13 03:35:12.983981 | orchestrator | Friday 13 February 2026 03:35:03 +0000 (0:00:00.618) 0:02:24.796 ******* 2026-02-13 03:35:12.983992 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:35:12.984003 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:35:12.984014 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:35:12.984025 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:35:12.984035 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:35:12.984046 | orchestrator | ok: [testbed-node-2] 2026-02-13 
03:35:12.984057 | orchestrator | 2026-02-13 03:35:12.984068 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-13 03:35:12.984079 | orchestrator | Friday 13 February 2026 03:35:05 +0000 (0:00:01.265) 0:02:26.061 ******* 2026-02-13 03:35:12.984091 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:35:12.984103 | orchestrator | 2026-02-13 03:35:12.984114 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-13 03:35:12.984125 | orchestrator | Friday 13 February 2026 03:35:06 +0000 (0:00:01.252) 0:02:27.314 ******* 2026-02-13 03:35:12.984136 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-13 03:35:12.984147 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-13 03:35:12.984158 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-13 03:35:12.984168 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-13 03:35:12.984179 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-13 03:35:12.984190 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-13 03:35:12.984200 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-13 03:35:12.984216 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-13 03:35:12.984227 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-13 03:35:12.984238 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-13 03:35:12.984249 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-13 03:35:12.984260 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-13 03:35:12.984270 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-13 
03:35:12.984282 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-13 03:35:12.984293 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-13 03:35:12.984303 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-13 03:35:12.984314 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-13 03:35:12.984332 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-13 03:35:18.099208 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-13 03:35:18.099362 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-13 03:35:18.099381 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-13 03:35:18.099392 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-13 03:35:18.099458 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-13 03:35:18.099470 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-13 03:35:18.099481 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-13 03:35:18.099492 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-13 03:35:18.099503 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-13 03:35:18.099513 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-13 03:35:18.099524 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-13 03:35:18.099534 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-13 03:35:18.099545 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-13 03:35:18.099556 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-13 03:35:18.099566 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-13 03:35:18.099578 | orchestrator | 
changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-13 03:35:18.099588 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-13 03:35:18.099599 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-13 03:35:18.099609 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-13 03:35:18.099620 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-13 03:35:18.099631 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-13 03:35:18.099641 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-13 03:35:18.099652 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-13 03:35:18.099662 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-13 03:35:18.099673 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-13 03:35:18.099683 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-13 03:35:18.099694 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-13 03:35:18.099705 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-13 03:35:18.099715 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-13 03:35:18.099726 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-13 03:35:18.099737 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-13 03:35:18.099747 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-13 03:35:18.099758 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-13 03:35:18.099769 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-13 03:35:18.099779 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 
2026-02-13 03:35:18.099790 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-13 03:35:18.099800 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-13 03:35:18.099812 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-13 03:35:18.099822 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-13 03:35:18.099833 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-13 03:35:18.099844 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-13 03:35:18.099854 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-13 03:35:18.099865 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-13 03:35:18.099884 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-13 03:35:18.099895 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-13 03:35:18.099906 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-13 03:35:18.099917 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-13 03:35:18.099927 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-13 03:35:18.099938 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-13 03:35:18.099962 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-13 03:35:18.099974 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-13 03:35:18.099985 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-13 03:35:18.099995 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-13 03:35:18.100006 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-13 03:35:18.100017 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-13 03:35:18.100028 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-13 03:35:18.100038 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-13 03:35:18.100049 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-13 03:35:18.100079 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-13 03:35:18.100091 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-13 03:35:18.100101 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-13 03:35:18.100112 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-13 03:35:18.100123 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-02-13 03:35:18.100134 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-13 03:35:18.100145 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-13 03:35:18.100156 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-02-13 03:35:18.100174 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-13 03:35:18.100192 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-02-13 03:35:18.100211 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-02-13 03:35:18.100230 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-02-13 03:35:18.100249 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-13 03:35:18.100263 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-02-13 03:35:18.100274 | orchestrator 
| changed: [testbed-node-1] => (item=/var/run/ceph) 2026-02-13 03:35:18.100285 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-02-13 03:35:18.100296 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-02-13 03:35:18.100306 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-02-13 03:35:18.100317 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-02-13 03:35:18.100327 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-02-13 03:35:18.100338 | orchestrator | 2026-02-13 03:35:18.100350 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-13 03:35:18.100361 | orchestrator | Friday 13 February 2026 03:35:12 +0000 (0:00:06.664) 0:02:33.978 ******* 2026-02-13 03:35:18.100372 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:35:18.100383 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:35:18.100419 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:35:18.100432 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:35:18.100453 | orchestrator | 2026-02-13 03:35:18.100465 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-13 03:35:18.100475 | orchestrator | Friday 13 February 2026 03:35:13 +0000 (0:00:01.001) 0:02:34.980 ******* 2026-02-13 03:35:18.100486 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-13 03:35:18.100498 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-13 03:35:18.100509 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 
2026-02-13 03:35:18.100519 | orchestrator |
2026-02-13 03:35:18.100530 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-13 03:35:18.100541 | orchestrator | Friday 13 February 2026 03:35:14 +0000 (0:00:00.736) 0:02:35.716 *******
2026-02-13 03:35:18.100552 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-13 03:35:18.100562 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-13 03:35:18.100573 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-13 03:35:18.100584 | orchestrator |
2026-02-13 03:35:18.100594 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-13 03:35:18.100605 | orchestrator | Friday 13 February 2026 03:35:15 +0000 (0:00:01.188) 0:02:36.904 *******
2026-02-13 03:35:18.100616 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:35:18.100627 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:35:18.100637 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:35:18.100648 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:18.100659 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:18.100669 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:18.100680 | orchestrator |
2026-02-13 03:35:18.100691 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-13 03:35:18.100707 | orchestrator | Friday 13 February 2026 03:35:16 +0000 (0:00:00.805) 0:02:37.709 *******
2026-02-13 03:35:18.100718 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:35:18.100729 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:35:18.100740 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:35:18.100750 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:18.100761 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:18.100771 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:18.100782 | orchestrator |
2026-02-13 03:35:18.100793 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-13 03:35:18.100803 | orchestrator | Friday 13 February 2026 03:35:17 +0000 (0:00:00.592) 0:02:38.302 *******
2026-02-13 03:35:18.100814 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:18.100825 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:18.100835 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:18.100846 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:18.100857 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:18.100867 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:18.100878 | orchestrator |
2026-02-13 03:35:18.100897 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-13 03:35:30.861711 | orchestrator | Friday 13 February 2026 03:35:18 +0000 (0:00:00.804) 0:02:39.106 *******
2026-02-13 03:35:30.861862 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.861883 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.861895 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.861907 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.861918 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.861929 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.861966 | orchestrator |
2026-02-13 03:35:30.861980 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-13 03:35:30.861991 | orchestrator | Friday 13 February 2026 03:35:18 +0000 (0:00:00.605) 0:02:39.712 *******
2026-02-13 03:35:30.862002 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.862013 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.862099 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.862111 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.862122 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.862132 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.862143 | orchestrator |
2026-02-13 03:35:30.862154 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-13 03:35:30.862167 | orchestrator | Friday 13 February 2026 03:35:19 +0000 (0:00:00.792) 0:02:40.505 *******
2026-02-13 03:35:30.862178 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.862188 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.862199 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.862212 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.862225 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.862237 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.862250 | orchestrator |
2026-02-13 03:35:30.862262 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-13 03:35:30.862276 | orchestrator | Friday 13 February 2026 03:35:20 +0000 (0:00:00.574) 0:02:41.080 *******
2026-02-13 03:35:30.862288 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.862301 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.862313 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.862325 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.862338 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.862350 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.862362 | orchestrator |
2026-02-13 03:35:30.862375 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-13 03:35:30.862388 | orchestrator | Friday 13 February 2026 03:35:20 +0000 (0:00:00.824) 0:02:41.904 *******
2026-02-13 03:35:30.862400 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.862456 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.862469 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.862481 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.862491 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.862502 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.862513 | orchestrator |
2026-02-13 03:35:30.862524 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-13 03:35:30.862535 | orchestrator | Friday 13 February 2026 03:35:21 +0000 (0:00:00.573) 0:02:42.478 *******
2026-02-13 03:35:30.862546 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.862557 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.862568 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.862579 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:35:30.862591 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:35:30.862601 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:35:30.862612 | orchestrator |
2026-02-13 03:35:30.862623 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-13 03:35:30.862634 | orchestrator | Friday 13 February 2026 03:35:24 +0000 (0:00:02.839) 0:02:45.317 *******
2026-02-13 03:35:30.862645 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:35:30.862655 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:35:30.862666 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:35:30.862676 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.862687 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.862698 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.862708 | orchestrator |
2026-02-13 03:35:30.862719 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-13 03:35:30.862740 | orchestrator | Friday 13 February 2026 03:35:24 +0000 (0:00:00.601) 0:02:45.919 *******
2026-02-13 03:35:30.862751 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:35:30.862762 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:35:30.862772 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:35:30.862783 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.862794 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.862805 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.862815 | orchestrator |
2026-02-13 03:35:30.862826 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-13 03:35:30.862837 | orchestrator | Friday 13 February 2026 03:35:25 +0000 (0:00:00.847) 0:02:46.766 *******
2026-02-13 03:35:30.862848 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.862858 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.862883 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.862894 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.862905 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.862915 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.862927 | orchestrator |
2026-02-13 03:35:30.862938 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-13 03:35:30.862949 | orchestrator | Friday 13 February 2026 03:35:26 +0000 (0:00:00.594) 0:02:47.360 *******
2026-02-13 03:35:30.862961 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-13 03:35:30.862973 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-13 03:35:30.862984 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-13 03:35:30.862995 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.863026 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.863037 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.863048 | orchestrator |
2026-02-13 03:35:30.863059 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-13 03:35:30.863070 | orchestrator | Friday 13 February 2026 03:35:27 +0000 (0:00:00.837) 0:02:48.198 *******
2026-02-13 03:35:30.863083 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-13 03:35:30.863098 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-13 03:35:30.863110 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.863121 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-13 03:35:30.863133 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-13 03:35:30.863143 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.863154 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-13 03:35:30.863173 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-13 03:35:30.863184 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.863194 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.863205 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.863216 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.863227 | orchestrator |
2026-02-13 03:35:30.863238 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-13 03:35:30.863249 | orchestrator | Friday 13 February 2026 03:35:27 +0000 (0:00:00.615) 0:02:48.814 *******
2026-02-13 03:35:30.863260 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.863271 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.863281 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.863292 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.863303 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.863314 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.863324 | orchestrator |
2026-02-13 03:35:30.863335 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-13 03:35:30.863346 | orchestrator | Friday 13 February 2026 03:35:28 +0000 (0:00:00.868) 0:02:49.683 *******
2026-02-13 03:35:30.863357 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.863368 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.863378 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.863389 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.863399 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.863429 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.863440 | orchestrator |
2026-02-13 03:35:30.863451 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-13 03:35:30.863462 | orchestrator | Friday 13 February 2026 03:35:29 +0000 (0:00:00.581) 0:02:50.265 *******
2026-02-13 03:35:30.863478 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.863489 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.863500 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.863511 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.863521 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.863532 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.863542 | orchestrator |
2026-02-13 03:35:30.863553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-13 03:35:30.863564 | orchestrator | Friday 13 February 2026 03:35:30 +0000 (0:00:00.825) 0:02:51.090 *******
2026-02-13 03:35:30.863575 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:30.863585 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:30.863596 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:30.863607 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:30.863617 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:30.863628 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:30.863639 | orchestrator |
2026-02-13 03:35:30.863650 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-13 03:35:30.863668 | orchestrator | Friday 13 February 2026 03:35:30 +0000 (0:00:00.778) 0:02:51.869 *******
2026-02-13 03:35:48.063851 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.064027 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:48.064056 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:48.064077 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:48.064095 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:48.064114 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:48.064168 | orchestrator |
2026-02-13 03:35:48.064191 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-13 03:35:48.064212 | orchestrator | Friday 13 February 2026 03:35:31 +0000 (0:00:00.594) 0:02:52.463 *******
2026-02-13 03:35:48.064232 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:35:48.064251 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:35:48.064270 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:35:48.064289 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:48.064308 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:48.064326 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:48.064343 | orchestrator |
2026-02-13 03:35:48.064364 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-13 03:35:48.064384 | orchestrator | Friday 13 February 2026 03:35:32 +0000 (0:00:00.832) 0:02:53.296 *******
2026-02-13 03:35:48.064404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 03:35:48.064453 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 03:35:48.064473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 03:35:48.064493 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.064512 | orchestrator |
2026-02-13 03:35:48.064532 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-13 03:35:48.064551 | orchestrator | Friday 13 February 2026 03:35:32 +0000 (0:00:00.412) 0:02:53.708 *******
2026-02-13 03:35:48.064571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 03:35:48.064591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 03:35:48.064610 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 03:35:48.064629 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.064648 | orchestrator |
2026-02-13 03:35:48.064665 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-13 03:35:48.064683 | orchestrator | Friday 13 February 2026 03:35:33 +0000 (0:00:00.404) 0:02:54.113 *******
2026-02-13 03:35:48.064702 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 03:35:48.064721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 03:35:48.064739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 03:35:48.064757 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.064775 | orchestrator |
2026-02-13 03:35:48.064794 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-13 03:35:48.064812 | orchestrator | Friday 13 February 2026 03:35:33 +0000 (0:00:00.418) 0:02:54.531 *******
2026-02-13 03:35:48.064830 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:35:48.064849 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:35:48.064866 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:35:48.064883 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:48.064900 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:48.064916 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:48.064934 | orchestrator |
2026-02-13 03:35:48.064951 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-13 03:35:48.064970 | orchestrator | Friday 13 February 2026 03:35:34 +0000 (0:00:00.611) 0:02:55.143 *******
2026-02-13 03:35:48.064988 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-13 03:35:48.065006 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-13 03:35:48.065024 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-13 03:35:48.065042 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-13 03:35:48.065060 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:48.065100 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-13 03:35:48.065119 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:48.065138 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-13 03:35:48.065155 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:48.065186 | orchestrator |
2026-02-13 03:35:48.065198 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-13 03:35:48.065221 | orchestrator | Friday 13 February 2026 03:35:35 +0000 (0:00:01.820) 0:02:56.964 *******
2026-02-13 03:35:48.065232 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:35:48.065243 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:35:48.065254 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:35:48.065265 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:35:48.065276 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:35:48.065286 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:35:48.065297 | orchestrator |
2026-02-13 03:35:48.065308 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-13 03:35:48.065319 | orchestrator | Friday 13 February 2026 03:35:38 +0000 (0:00:02.651) 0:02:59.615 *******
2026-02-13 03:35:48.065330 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:35:48.065357 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:35:48.065369 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:35:48.065380 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:35:48.065391 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:35:48.065401 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:35:48.065450 | orchestrator |
2026-02-13 03:35:48.065469 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-13 03:35:48.065487 | orchestrator | Friday 13 February 2026 03:35:39 +0000 (0:00:00.989) 0:03:00.605 *******
2026-02-13 03:35:48.065505 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.065521 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:48.065539 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:48.065558 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:35:48.065576 | orchestrator |
2026-02-13 03:35:48.065594 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-13 03:35:48.065614 | orchestrator | Friday 13 February 2026 03:35:40 +0000 (0:00:01.061) 0:03:01.666 *******
2026-02-13 03:35:48.065633 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:35:48.065681 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:35:48.065703 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:35:48.065721 | orchestrator |
2026-02-13 03:35:48.065739 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-13 03:35:48.065758 | orchestrator | Friday 13 February 2026 03:35:40 +0000 (0:00:00.345) 0:03:02.012 *******
2026-02-13 03:35:48.065777 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:35:48.065796 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:35:48.065814 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:35:48.065829 | orchestrator |
2026-02-13 03:35:48.065840 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-13 03:35:48.065851 | orchestrator | Friday 13 February 2026 03:35:42 +0000 (0:00:01.557) 0:03:03.570 *******
2026-02-13 03:35:48.065861 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 03:35:48.065872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 03:35:48.065883 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 03:35:48.065894 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:48.065905 | orchestrator |
2026-02-13 03:35:48.065916 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-13 03:35:48.065926 | orchestrator | Friday 13 February 2026 03:35:43 +0000 (0:00:00.631) 0:03:04.201 *******
2026-02-13 03:35:48.065937 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:35:48.065948 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:35:48.065959 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:35:48.065969 | orchestrator |
2026-02-13 03:35:48.065980 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-13 03:35:48.065991 | orchestrator | Friday 13 February 2026 03:35:43 +0000 (0:00:00.344) 0:03:04.546 *******
2026-02-13 03:35:48.066002 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:35:48.066012 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:35:48.066096 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:35:48.066119 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:35:48.066130 | orchestrator |
2026-02-13 03:35:48.066141 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-13 03:35:48.066152 | orchestrator | Friday 13 February 2026 03:35:44 +0000 (0:00:01.056) 0:03:05.603 *******
2026-02-13 03:35:48.066162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 03:35:48.066173 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 03:35:48.066220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 03:35:48.066232 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.066243 | orchestrator |
2026-02-13 03:35:48.066254 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-13 03:35:48.066265 | orchestrator | Friday 13 February 2026 03:35:44 +0000 (0:00:00.419) 0:03:06.022 *******
2026-02-13 03:35:48.066276 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.066286 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:48.066297 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:48.066308 | orchestrator |
2026-02-13 03:35:48.066318 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-13 03:35:48.066329 | orchestrator | Friday 13 February 2026 03:35:45 +0000 (0:00:00.325) 0:03:06.348 *******
2026-02-13 03:35:48.066340 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.066351 | orchestrator |
2026-02-13 03:35:48.066362 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-13 03:35:48.066373 | orchestrator | Friday 13 February 2026 03:35:45 +0000 (0:00:00.222) 0:03:06.571 *******
2026-02-13 03:35:48.066383 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.066394 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:35:48.066405 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:35:48.066479 | orchestrator |
2026-02-13 03:35:48.066499 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-13 03:35:48.066519 | orchestrator | Friday 13 February 2026 03:35:45 +0000 (0:00:00.331) 0:03:06.903 *******
2026-02-13 03:35:48.066530 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.066541 | orchestrator |
2026-02-13 03:35:48.066552 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-13 03:35:48.066562 | orchestrator | Friday 13 February 2026 03:35:46 +0000 (0:00:00.666) 0:03:07.569 *******
2026-02-13 03:35:48.066573 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.066586 | orchestrator |
2026-02-13 03:35:48.066605 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-13 03:35:48.066622 | orchestrator | Friday 13 February 2026 03:35:46 +0000 (0:00:00.263) 0:03:07.833 *******
2026-02-13 03:35:48.066640 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.066658 | orchestrator |
2026-02-13 03:35:48.066676 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-13 03:35:48.066692 | orchestrator | Friday 13 February 2026 03:35:46 +0000 (0:00:00.143) 0:03:07.977 *******
2026-02-13 03:35:48.066722 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.066738 | orchestrator |
2026-02-13 03:35:48.066754 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-13 03:35:48.066770 | orchestrator | Friday 13 February 2026 03:35:47 +0000 (0:00:00.229) 0:03:08.206 *******
2026-02-13 03:35:48.066787 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.066805 | orchestrator |
2026-02-13 03:35:48.066823 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-13 03:35:48.066841 | orchestrator | Friday 13 February 2026 03:35:47 +0000 (0:00:00.240) 0:03:08.446 *******
2026-02-13 03:35:48.066857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 03:35:48.066876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 03:35:48.066894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 03:35:48.066927 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:35:48.066946 | orchestrator |
2026-02-13 03:35:48.066965 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-13 03:35:48.066983 | orchestrator | Friday 13 February 2026 03:35:47 +0000 (0:00:00.437) 0:03:08.884 *******
2026-02-13 03:35:48.067015 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:36:06.516882 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:36:06.516983 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:36:06.516995 | orchestrator |
2026-02-13 03:36:06.517007 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-13 03:36:06.517017 | orchestrator | Friday 13 February 2026 03:35:48 +0000 (0:00:00.322) 0:03:09.206 *******
2026-02-13 03:36:06.517026 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:36:06.517035 | orchestrator |
2026-02-13 03:36:06.517044 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-13 03:36:06.517053 | orchestrator | Friday 13 February 2026 03:35:48 +0000 (0:00:00.229) 0:03:09.436 *******
2026-02-13 03:36:06.517062 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:36:06.517070 | orchestrator |
2026-02-13 03:36:06.517079 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-13 03:36:06.517088 | orchestrator | Friday 13 February 2026 03:35:48 +0000 (0:00:00.227) 0:03:09.664 *******
2026-02-13 03:36:06.517097 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:36:06.517106 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:36:06.517114 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:36:06.517124 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:36:06.517133 | orchestrator |
2026-02-13 03:36:06.517141 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-13 03:36:06.517150 | orchestrator | Friday 13 February 2026 03:35:49 +0000 (0:00:01.092) 0:03:10.757 *******
2026-02-13 03:36:06.517159 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:36:06.517169 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:36:06.517178 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:36:06.517187 | orchestrator |
2026-02-13 03:36:06.517196 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-13 03:36:06.517205 | orchestrator | Friday 13 February 2026 03:35:50 +0000 (0:00:00.320) 0:03:11.077 *******
2026-02-13 03:36:06.517214 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:36:06.517223 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:36:06.517231 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:36:06.517240 | orchestrator |
2026-02-13 03:36:06.517249 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-13 03:36:06.517258 | orchestrator | Friday 13 February 2026 03:35:51 +0000 (0:00:01.496) 0:03:12.574 *******
2026-02-13 03:36:06.517266 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 03:36:06.517276 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 03:36:06.517284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 03:36:06.517293 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:36:06.517302 | orchestrator |
2026-02-13 03:36:06.517311 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-13 03:36:06.517319 | orchestrator | Friday 13 February 2026 03:35:52 +0000 (0:00:00.673) 0:03:13.248 *******
2026-02-13 03:36:06.517328 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:36:06.517337 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:36:06.517346 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:36:06.517354 | orchestrator |
2026-02-13 03:36:06.517363 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-13 03:36:06.517372 | orchestrator | Friday 13 February 2026 03:35:52 +0000 (0:00:00.320) 0:03:13.569 *******
2026-02-13 03:36:06.517381 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:36:06.517390 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:36:06.517398 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:36:06.517461 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:36:06.517473 | orchestrator |
2026-02-13 03:36:06.517483 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-13 03:36:06.517493 | orchestrator | Friday 13 February 2026 03:35:53 +0000 (0:00:01.034) 0:03:14.603 *******
2026-02-13 03:36:06.517503 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:36:06.517513 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:36:06.517523 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:36:06.517532 | orchestrator |
2026-02-13 03:36:06.517542 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-13 03:36:06.517553 | orchestrator | Friday 13 February 2026 03:35:53 +0000 (0:00:00.326) 0:03:14.929 *******
2026-02-13 03:36:06.517564 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:36:06.517574 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:36:06.517584 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:36:06.517594 | orchestrator |
2026-02-13 03:36:06.517605 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-13 03:36:06.517614 | orchestrator | Friday 13 February 2026 03:35:55 +0000 (0:00:01.214) 0:03:16.144 *******
2026-02-13 03:36:06.517625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 03:36:06.517635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 03:36:06.517659 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 03:36:06.517669 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:36:06.517679 | orchestrator |
2026-02-13 03:36:06.517689 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-13 03:36:06.517700 | orchestrator | Friday 13 February 2026 03:35:55 +0000 (0:00:00.848) 0:03:16.992 *******
2026-02-13 03:36:06.517710 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:36:06.517720 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:36:06.517730 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:36:06.517740 | orchestrator |
2026-02-13 03:36:06.517751 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-13 03:36:06.517761 | orchestrator | Friday 13 February 2026 03:35:56 +0000 (0:00:00.548) 0:03:17.541 *******
2026-02-13 03:36:06.517771 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:36:06.517781 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:36:06.517791 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:36:06.517800 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:36:06.517811 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:36:06.517822 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:36:06.517832 | orchestrator |
2026-02-13 03:36:06.517858 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-13 03:36:06.517869 | orchestrator | Friday 13 February 2026 03:35:57 +0000 (0:00:00.617) 0:03:18.159 ******* 2026-02-13 03:36:06.517877 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:36:06.517886 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:36:06.517895 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:36:06.517904 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:36:06.517913 | orchestrator | 2026-02-13 03:36:06.517921 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-13 03:36:06.517930 | orchestrator | Friday 13 February 2026 03:35:58 +0000 (0:00:01.121) 0:03:19.281 ******* 2026-02-13 03:36:06.517939 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:06.517948 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:06.517957 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:06.517965 | orchestrator | 2026-02-13 03:36:06.517974 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-13 03:36:06.517983 | orchestrator | Friday 13 February 2026 03:35:58 +0000 (0:00:00.342) 0:03:19.623 ******* 2026-02-13 03:36:06.517992 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:36:06.518007 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:36:06.518087 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:36:06.518104 | orchestrator | 2026-02-13 03:36:06.518118 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-13 03:36:06.518132 | orchestrator | Friday 13 February 2026 03:35:59 +0000 (0:00:01.201) 0:03:20.824 ******* 2026-02-13 03:36:06.518146 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-13 03:36:06.518160 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-13 03:36:06.518175 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-13 03:36:06.518190 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:06.518218 | orchestrator | 2026-02-13 03:36:06.518227 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-13 03:36:06.518236 | orchestrator | Friday 13 February 2026 03:36:00 +0000 (0:00:01.074) 0:03:21.898 ******* 2026-02-13 03:36:06.518245 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:06.518253 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:06.518262 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:06.518271 | orchestrator | 2026-02-13 03:36:06.518279 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-13 03:36:06.518288 | orchestrator | 2026-02-13 03:36:06.518297 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-13 03:36:06.518306 | orchestrator | Friday 13 February 2026 03:36:01 +0000 (0:00:00.589) 0:03:22.488 ******* 2026-02-13 03:36:06.518315 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:36:06.518325 | orchestrator | 2026-02-13 03:36:06.518334 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-13 03:36:06.518343 | orchestrator | Friday 13 February 2026 03:36:02 +0000 (0:00:00.766) 0:03:23.255 ******* 2026-02-13 03:36:06.518352 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:36:06.518361 | orchestrator | 2026-02-13 03:36:06.518369 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-13 03:36:06.518378 | 
orchestrator | Friday 13 February 2026 03:36:02 +0000 (0:00:00.599) 0:03:23.854 ******* 2026-02-13 03:36:06.518387 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:06.518395 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:06.518404 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:06.518413 | orchestrator | 2026-02-13 03:36:06.518422 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-13 03:36:06.518483 | orchestrator | Friday 13 February 2026 03:36:03 +0000 (0:00:00.738) 0:03:24.593 ******* 2026-02-13 03:36:06.518492 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:06.518501 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:06.518510 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:06.518519 | orchestrator | 2026-02-13 03:36:06.518527 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-13 03:36:06.518536 | orchestrator | Friday 13 February 2026 03:36:04 +0000 (0:00:00.581) 0:03:25.174 ******* 2026-02-13 03:36:06.518545 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:06.518554 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:06.518563 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:06.518571 | orchestrator | 2026-02-13 03:36:06.518580 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-13 03:36:06.518589 | orchestrator | Friday 13 February 2026 03:36:04 +0000 (0:00:00.349) 0:03:25.524 ******* 2026-02-13 03:36:06.518598 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:06.518606 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:06.518622 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:06.518631 | orchestrator | 2026-02-13 03:36:06.518639 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-13 03:36:06.518648 | orchestrator | Friday 
13 February 2026 03:36:04 +0000 (0:00:00.316) 0:03:25.841 ******* 2026-02-13 03:36:06.518672 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:06.518681 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:06.518689 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:06.518698 | orchestrator | 2026-02-13 03:36:06.518707 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-13 03:36:06.518715 | orchestrator | Friday 13 February 2026 03:36:05 +0000 (0:00:00.763) 0:03:26.604 ******* 2026-02-13 03:36:06.518724 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:06.518733 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:06.518742 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:06.518750 | orchestrator | 2026-02-13 03:36:06.518759 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-13 03:36:06.518768 | orchestrator | Friday 13 February 2026 03:36:06 +0000 (0:00:00.594) 0:03:27.198 ******* 2026-02-13 03:36:06.518777 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:06.518785 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:06.518803 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:28.026152 | orchestrator | 2026-02-13 03:36:28.026250 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-13 03:36:28.026263 | orchestrator | Friday 13 February 2026 03:36:06 +0000 (0:00:00.327) 0:03:27.526 ******* 2026-02-13 03:36:28.026271 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.026280 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.026287 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.026294 | orchestrator | 2026-02-13 03:36:28.026302 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-13 03:36:28.026310 | orchestrator | Friday 13 February 2026 03:36:07 +0000 
(0:00:00.778) 0:03:28.305 ******* 2026-02-13 03:36:28.026317 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.026324 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.026332 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.026339 | orchestrator | 2026-02-13 03:36:28.026346 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-13 03:36:28.026353 | orchestrator | Friday 13 February 2026 03:36:08 +0000 (0:00:00.811) 0:03:29.116 ******* 2026-02-13 03:36:28.026361 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:28.026370 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:28.026377 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:28.026384 | orchestrator | 2026-02-13 03:36:28.026391 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-13 03:36:28.026399 | orchestrator | Friday 13 February 2026 03:36:08 +0000 (0:00:00.539) 0:03:29.656 ******* 2026-02-13 03:36:28.026406 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.026415 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.026422 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.026429 | orchestrator | 2026-02-13 03:36:28.026437 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-13 03:36:28.026489 | orchestrator | Friday 13 February 2026 03:36:08 +0000 (0:00:00.341) 0:03:29.998 ******* 2026-02-13 03:36:28.026497 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:28.026504 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:28.026512 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:28.026519 | orchestrator | 2026-02-13 03:36:28.026526 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-13 03:36:28.026533 | orchestrator | Friday 13 February 2026 03:36:09 +0000 (0:00:00.317) 0:03:30.316 ******* 
2026-02-13 03:36:28.026541 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:28.026548 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:28.026555 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:28.026563 | orchestrator | 2026-02-13 03:36:28.026570 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-13 03:36:28.026577 | orchestrator | Friday 13 February 2026 03:36:09 +0000 (0:00:00.327) 0:03:30.644 ******* 2026-02-13 03:36:28.026585 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:28.026612 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:28.026620 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:28.026627 | orchestrator | 2026-02-13 03:36:28.026634 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-13 03:36:28.026642 | orchestrator | Friday 13 February 2026 03:36:10 +0000 (0:00:00.546) 0:03:31.190 ******* 2026-02-13 03:36:28.026649 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:28.026656 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:28.026663 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:28.026670 | orchestrator | 2026-02-13 03:36:28.026678 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-13 03:36:28.026685 | orchestrator | Friday 13 February 2026 03:36:10 +0000 (0:00:00.352) 0:03:31.543 ******* 2026-02-13 03:36:28.026692 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:28.026699 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:36:28.026707 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:36:28.026714 | orchestrator | 2026-02-13 03:36:28.026721 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-13 03:36:28.026728 | orchestrator | Friday 13 February 2026 03:36:10 +0000 (0:00:00.301) 0:03:31.844 ******* 
2026-02-13 03:36:28.026736 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.026743 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.026750 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.026757 | orchestrator | 2026-02-13 03:36:28.026764 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-13 03:36:28.026772 | orchestrator | Friday 13 February 2026 03:36:11 +0000 (0:00:00.332) 0:03:32.177 ******* 2026-02-13 03:36:28.026779 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.026786 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.026793 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.026800 | orchestrator | 2026-02-13 03:36:28.026807 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-13 03:36:28.026814 | orchestrator | Friday 13 February 2026 03:36:11 +0000 (0:00:00.559) 0:03:32.736 ******* 2026-02-13 03:36:28.026821 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.026828 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.026835 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.026842 | orchestrator | 2026-02-13 03:36:28.026862 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-13 03:36:28.026870 | orchestrator | Friday 13 February 2026 03:36:12 +0000 (0:00:00.576) 0:03:33.312 ******* 2026-02-13 03:36:28.026877 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.026884 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.026891 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.026898 | orchestrator | 2026-02-13 03:36:28.026905 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-13 03:36:28.026913 | orchestrator | Friday 13 February 2026 03:36:12 +0000 (0:00:00.335) 0:03:33.648 ******* 2026-02-13 03:36:28.026920 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:36:28.026928 | orchestrator | 2026-02-13 03:36:28.026935 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-13 03:36:28.026942 | orchestrator | Friday 13 February 2026 03:36:13 +0000 (0:00:00.857) 0:03:34.506 ******* 2026-02-13 03:36:28.026949 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:36:28.026956 | orchestrator | 2026-02-13 03:36:28.026964 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-13 03:36:28.026985 | orchestrator | Friday 13 February 2026 03:36:13 +0000 (0:00:00.160) 0:03:34.666 ******* 2026-02-13 03:36:28.026992 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-13 03:36:28.027000 | orchestrator | 2026-02-13 03:36:28.027007 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-13 03:36:28.027014 | orchestrator | Friday 13 February 2026 03:36:14 +0000 (0:00:01.024) 0:03:35.691 ******* 2026-02-13 03:36:28.027027 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.027035 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.027042 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.027049 | orchestrator | 2026-02-13 03:36:28.027059 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-13 03:36:28.027071 | orchestrator | Friday 13 February 2026 03:36:15 +0000 (0:00:00.353) 0:03:36.044 ******* 2026-02-13 03:36:28.027083 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.027095 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.027106 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.027117 | orchestrator | 2026-02-13 03:36:28.027128 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-13 03:36:28.027139 | orchestrator 
| Friday 13 February 2026 03:36:15 +0000 (0:00:00.577) 0:03:36.622 ******* 2026-02-13 03:36:28.027150 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:36:28.027161 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:36:28.027173 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:36:28.027185 | orchestrator | 2026-02-13 03:36:28.027197 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-13 03:36:28.027209 | orchestrator | Friday 13 February 2026 03:36:16 +0000 (0:00:01.238) 0:03:37.861 ******* 2026-02-13 03:36:28.027221 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:36:28.027233 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:36:28.027245 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:36:28.027257 | orchestrator | 2026-02-13 03:36:28.027270 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-13 03:36:28.027278 | orchestrator | Friday 13 February 2026 03:36:17 +0000 (0:00:00.805) 0:03:38.667 ******* 2026-02-13 03:36:28.027285 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:36:28.027292 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:36:28.027299 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:36:28.027306 | orchestrator | 2026-02-13 03:36:28.027314 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-13 03:36:28.027321 | orchestrator | Friday 13 February 2026 03:36:18 +0000 (0:00:00.731) 0:03:39.398 ******* 2026-02-13 03:36:28.027328 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.027335 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.027343 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.027350 | orchestrator | 2026-02-13 03:36:28.027357 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-13 03:36:28.027364 | orchestrator | Friday 13 February 2026 
03:36:19 +0000 (0:00:00.960) 0:03:40.359 ******* 2026-02-13 03:36:28.027371 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:36:28.027379 | orchestrator | 2026-02-13 03:36:28.027386 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-13 03:36:28.027393 | orchestrator | Friday 13 February 2026 03:36:20 +0000 (0:00:01.264) 0:03:41.623 ******* 2026-02-13 03:36:28.027403 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.027415 | orchestrator | 2026-02-13 03:36:28.027426 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-13 03:36:28.027437 | orchestrator | Friday 13 February 2026 03:36:21 +0000 (0:00:00.690) 0:03:42.314 ******* 2026-02-13 03:36:28.027469 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-13 03:36:28.027480 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:36:28.027491 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:36:28.027503 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-13 03:36:28.027515 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-13 03:36:28.027527 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-13 03:36:28.027539 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-13 03:36:28.027551 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-13 03:36:28.027563 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-13 03:36:28.027580 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-13 03:36:28.027588 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-13 03:36:28.027595 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-13 03:36:28.027602 | orchestrator | 2026-02-13 
03:36:28.027609 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-13 03:36:28.027617 | orchestrator | Friday 13 February 2026 03:36:24 +0000 (0:00:03.070) 0:03:45.385 ******* 2026-02-13 03:36:28.027624 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:36:28.027631 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:36:28.027644 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:36:28.027651 | orchestrator | 2026-02-13 03:36:28.027659 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-13 03:36:28.027666 | orchestrator | Friday 13 February 2026 03:36:25 +0000 (0:00:01.243) 0:03:46.629 ******* 2026-02-13 03:36:28.027673 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.027680 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.027687 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.027695 | orchestrator | 2026-02-13 03:36:28.027702 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-13 03:36:28.027709 | orchestrator | Friday 13 February 2026 03:36:26 +0000 (0:00:00.567) 0:03:47.196 ******* 2026-02-13 03:36:28.027716 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:36:28.027723 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:36:28.027731 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:36:28.027738 | orchestrator | 2026-02-13 03:36:28.027745 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-13 03:36:28.027752 | orchestrator | Friday 13 February 2026 03:36:26 +0000 (0:00:00.328) 0:03:47.525 ******* 2026-02-13 03:36:28.027759 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:36:28.027767 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:36:28.027774 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:36:28.027781 | orchestrator | 2026-02-13 03:36:28.027796 | orchestrator | TASK 
[ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-13 03:37:28.809998 | orchestrator | Friday 13 February 2026 03:36:28 +0000 (0:00:01.505) 0:03:49.030 ******* 2026-02-13 03:37:28.810174 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:37:28.810191 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:37:28.810204 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:37:28.810215 | orchestrator | 2026-02-13 03:37:28.810228 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-13 03:37:28.810240 | orchestrator | Friday 13 February 2026 03:36:29 +0000 (0:00:01.357) 0:03:50.387 ******* 2026-02-13 03:37:28.810251 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:37:28.810262 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:37:28.810272 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:37:28.810283 | orchestrator | 2026-02-13 03:37:28.810294 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-13 03:37:28.810305 | orchestrator | Friday 13 February 2026 03:36:29 +0000 (0:00:00.557) 0:03:50.945 ******* 2026-02-13 03:37:28.810317 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:37:28.810328 | orchestrator | 2026-02-13 03:37:28.810340 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-13 03:37:28.810351 | orchestrator | Friday 13 February 2026 03:36:30 +0000 (0:00:00.587) 0:03:51.533 ******* 2026-02-13 03:37:28.810362 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:37:28.810373 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:37:28.810384 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:37:28.810395 | orchestrator | 2026-02-13 03:37:28.810422 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] 
*********************** 2026-02-13 03:37:28.810434 | orchestrator | Friday 13 February 2026 03:36:30 +0000 (0:00:00.312) 0:03:51.846 ******* 2026-02-13 03:37:28.810455 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:37:28.810527 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:37:28.810542 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:37:28.810555 | orchestrator | 2026-02-13 03:37:28.810567 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-13 03:37:28.810581 | orchestrator | Friday 13 February 2026 03:36:31 +0000 (0:00:00.536) 0:03:52.382 ******* 2026-02-13 03:37:28.810593 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:37:28.810607 | orchestrator | 2026-02-13 03:37:28.810619 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-13 03:37:28.810632 | orchestrator | Friday 13 February 2026 03:36:31 +0000 (0:00:00.541) 0:03:52.923 ******* 2026-02-13 03:37:28.810645 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:37:28.810657 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:37:28.810670 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:37:28.810681 | orchestrator | 2026-02-13 03:37:28.810694 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-13 03:37:28.810707 | orchestrator | Friday 13 February 2026 03:36:33 +0000 (0:00:01.788) 0:03:54.712 ******* 2026-02-13 03:37:28.810720 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:37:28.810732 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:37:28.810744 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:37:28.810755 | orchestrator | 2026-02-13 03:37:28.810765 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-13 03:37:28.810776 | orchestrator | Friday 
13 February 2026 03:36:35 +0000 (0:00:01.433) 0:03:56.145 ******* 2026-02-13 03:37:28.810787 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:37:28.810797 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:37:28.810808 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:37:28.810819 | orchestrator | 2026-02-13 03:37:28.810829 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-13 03:37:28.810840 | orchestrator | Friday 13 February 2026 03:36:36 +0000 (0:00:01.778) 0:03:57.923 ******* 2026-02-13 03:37:28.810851 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:37:28.810861 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:37:28.810872 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:37:28.810882 | orchestrator | 2026-02-13 03:37:28.810893 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-13 03:37:28.810904 | orchestrator | Friday 13 February 2026 03:36:38 +0000 (0:00:01.964) 0:03:59.887 ******* 2026-02-13 03:37:28.810915 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:37:28.810925 | orchestrator | 2026-02-13 03:37:28.810936 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-13 03:37:28.810947 | orchestrator | Friday 13 February 2026 03:36:39 +0000 (0:00:00.815) 0:04:00.702 ******* 2026-02-13 03:37:28.810971 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-13 03:37:28.810982 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:28.810994 | orchestrator |
2026-02-13 03:37:28.811005 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-13 03:37:28.811015 | orchestrator | Friday 13 February 2026 03:37:01 +0000 (0:00:21.919) 0:04:22.622 *******
2026-02-13 03:37:28.811026 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:28.811037 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:28.811048 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:28.811059 | orchestrator |
2026-02-13 03:37:28.811069 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-13 03:37:28.811080 | orchestrator | Friday 13 February 2026 03:37:10 +0000 (0:00:09.318) 0:04:31.940 *******
2026-02-13 03:37:28.811091 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:28.811101 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:28.811112 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:28.811131 | orchestrator |
2026-02-13 03:37:28.811142 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-13 03:37:28.811153 | orchestrator | Friday 13 February 2026 03:37:11 +0000 (0:00:00.284) 0:04:32.224 *******
2026-02-13 03:37:28.811184 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__83990f8c9f9bfebd6fd2040dd40be0003f95bbe6'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-13 03:37:28.811198 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__83990f8c9f9bfebd6fd2040dd40be0003f95bbe6'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-13 03:37:28.811210 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__83990f8c9f9bfebd6fd2040dd40be0003f95bbe6'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-13 03:37:28.811223 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__83990f8c9f9bfebd6fd2040dd40be0003f95bbe6'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-13 03:37:28.811234 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__83990f8c9f9bfebd6fd2040dd40be0003f95bbe6'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-13 03:37:28.811246 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__83990f8c9f9bfebd6fd2040dd40be0003f95bbe6'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__83990f8c9f9bfebd6fd2040dd40be0003f95bbe6'}])
2026-02-13 03:37:28.811257 | orchestrator |
2026-02-13 03:37:28.811268 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-13 03:37:28.811280 | orchestrator | Friday 13 February 2026 03:37:25 +0000 (0:00:14.473) 0:04:46.697 *******
2026-02-13 03:37:28.811291 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:28.811301 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:28.811312 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:28.811323 | orchestrator |
2026-02-13 03:37:28.811334 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-13 03:37:28.811345 | orchestrator | Friday 13 February 2026 03:37:25 +0000 (0:00:00.317) 0:04:47.015 *******
2026-02-13 03:37:28.811356 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:37:28.811366 | orchestrator |
2026-02-13 03:37:28.811377 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-13 03:37:28.811388 | orchestrator | Friday 13 February 2026 03:37:26 +0000 (0:00:00.604) 0:04:47.619 *******
2026-02-13 03:37:28.811398 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:28.811409 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:28.811420 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:28.811431 | orchestrator |
2026-02-13 03:37:28.811441 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-13 03:37:28.811464 | orchestrator | Friday 13 February 2026 03:37:26 +0000 (0:00:00.300) 0:04:47.920 *******
2026-02-13 03:37:28.811527 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:28.811550 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:28.811568 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:28.811581 | orchestrator |
2026-02-13 03:37:28.811592 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-13 03:37:28.811603 | orchestrator | Friday 13 February 2026 03:37:27 +0000 (0:00:00.347) 0:04:48.267 *******
2026-02-13 03:37:28.811614 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 03:37:28.811625 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 03:37:28.811636 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 03:37:28.811646 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:28.811657 | orchestrator |
2026-02-13 03:37:28.811668 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-13 03:37:28.811678 | orchestrator | Friday 13 February 2026 03:37:28 +0000 (0:00:00.764) 0:04:49.032 *******
2026-02-13 03:37:28.811689 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:28.811699 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:28.811710 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:28.811721 | orchestrator |
2026-02-13 03:37:28.811732 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-13 03:37:28.811742 | orchestrator |
2026-02-13 03:37:28.811762 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-13 03:37:54.575955 | orchestrator | Friday 13 February 2026 03:37:28 +0000 (0:00:00.777) 0:04:49.809 *******
2026-02-13 03:37:54.576067 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:37:54.576085 | orchestrator |
2026-02-13 03:37:54.576098 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-13 03:37:54.576110 | orchestrator | Friday 13 February 2026 03:37:29 +0000 (0:00:00.521) 0:04:50.331 *******
2026-02-13 03:37:54.576121 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:37:54.576132 | orchestrator |
2026-02-13 03:37:54.576143 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-13 03:37:54.576154 | orchestrator | Friday 13 February 2026 03:37:30 +0000 (0:00:00.723) 0:04:51.055 *******
2026-02-13 03:37:54.576165 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:54.576177 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:54.576187 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:54.576198 | orchestrator |
2026-02-13 03:37:54.576209 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-13 03:37:54.576220 | orchestrator | Friday 13 February 2026 03:37:30 +0000 (0:00:00.790) 0:04:51.845 *******
2026-02-13 03:37:54.576231 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.576243 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.576253 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.576264 | orchestrator |
2026-02-13 03:37:54.576275 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-13 03:37:54.576286 | orchestrator | Friday 13 February 2026 03:37:31 +0000 (0:00:00.315) 0:04:52.161 *******
2026-02-13 03:37:54.576296 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.576307 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.576318 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.576329 | orchestrator |
2026-02-13 03:37:54.576340 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-13 03:37:54.576350 | orchestrator | Friday 13 February 2026 03:37:31 +0000 (0:00:00.558) 0:04:52.719 *******
2026-02-13 03:37:54.576361 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.576373 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.576409 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.576420 | orchestrator |
2026-02-13 03:37:54.576431 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-13 03:37:54.576442 | orchestrator | Friday 13 February 2026 03:37:32 +0000 (0:00:00.337) 0:04:53.057 *******
2026-02-13 03:37:54.576452 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:54.576463 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:54.576474 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:54.576484 | orchestrator |
2026-02-13 03:37:54.576528 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-13 03:37:54.576541 | orchestrator | Friday 13 February 2026 03:37:32 +0000 (0:00:00.764) 0:04:53.821 *******
2026-02-13 03:37:54.576554 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.576566 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.576579 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.576591 | orchestrator |
2026-02-13 03:37:54.576603 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-13 03:37:54.576615 | orchestrator | Friday 13 February 2026 03:37:33 +0000 (0:00:00.328) 0:04:54.149 *******
2026-02-13 03:37:54.576627 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.576640 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.576652 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.576664 | orchestrator |
2026-02-13 03:37:54.576676 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-13 03:37:54.576688 | orchestrator | Friday 13 February 2026 03:37:33 +0000 (0:00:00.565) 0:04:54.715 *******
2026-02-13 03:37:54.576701 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:54.576713 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:54.576726 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:54.576738 | orchestrator |
2026-02-13 03:37:54.576750 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-13 03:37:54.576763 | orchestrator | Friday 13 February 2026 03:37:34 +0000 (0:00:00.778) 0:04:55.494 *******
2026-02-13 03:37:54.576774 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:54.576787 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:54.576799 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:54.576812 | orchestrator |
2026-02-13 03:37:54.576824 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-13 03:37:54.576836 | orchestrator | Friday 13 February 2026 03:37:35 +0000 (0:00:00.713) 0:04:56.207 *******
2026-02-13 03:37:54.576849 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.576860 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.576886 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.576898 | orchestrator |
2026-02-13 03:37:54.576909 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-13 03:37:54.576920 | orchestrator | Friday 13 February 2026 03:37:35 +0000 (0:00:00.334) 0:04:56.542 *******
2026-02-13 03:37:54.576930 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:54.576941 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:54.576951 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:54.576962 | orchestrator |
2026-02-13 03:37:54.576973 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-13 03:37:54.576983 | orchestrator | Friday 13 February 2026 03:37:36 +0000 (0:00:00.619) 0:04:57.162 *******
2026-02-13 03:37:54.576994 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.577005 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.577023 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.577041 | orchestrator |
2026-02-13 03:37:54.577059 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-13 03:37:54.577086 | orchestrator | Friday 13 February 2026 03:37:36 +0000 (0:00:00.318) 0:04:57.480 *******
2026-02-13 03:37:54.577105 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.577123 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.577141 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.577158 | orchestrator |
2026-02-13 03:37:54.577212 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-13 03:37:54.577231 | orchestrator | Friday 13 February 2026 03:37:36 +0000 (0:00:00.337) 0:04:57.817 *******
2026-02-13 03:37:54.577251 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.577270 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.577287 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.577305 | orchestrator |
2026-02-13 03:37:54.577323 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-13 03:37:54.577342 | orchestrator | Friday 13 February 2026 03:37:37 +0000 (0:00:00.310) 0:04:58.128 *******
2026-02-13 03:37:54.577359 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.577378 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.577390 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.577401 | orchestrator |
2026-02-13 03:37:54.577412 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-13 03:37:54.577423 | orchestrator | Friday 13 February 2026 03:37:37 +0000 (0:00:00.632) 0:04:58.760 *******
2026-02-13 03:37:54.577433 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.577444 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.577455 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.577465 | orchestrator |
2026-02-13 03:37:54.577476 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-13 03:37:54.577515 | orchestrator | Friday 13 February 2026 03:37:38 +0000 (0:00:00.373) 0:04:59.134 *******
2026-02-13 03:37:54.577530 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:54.577541 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:54.577552 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:54.577562 | orchestrator |
2026-02-13 03:37:54.577574 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-13 03:37:54.577584 | orchestrator | Friday 13 February 2026 03:37:38 +0000 (0:00:00.356) 0:04:59.490 *******
2026-02-13 03:37:54.577595 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:54.577605 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:54.577616 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:54.577627 | orchestrator |
2026-02-13 03:37:54.577638 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-13 03:37:54.577648 | orchestrator | Friday 13 February 2026 03:37:38 +0000 (0:00:00.338) 0:04:59.829 *******
2026-02-13 03:37:54.577659 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:54.577670 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:54.577680 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:54.577691 | orchestrator |
2026-02-13 03:37:54.577702 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-13 03:37:54.577712 | orchestrator | Friday 13 February 2026 03:37:39 +0000 (0:00:00.702) 0:05:00.531 *******
2026-02-13 03:37:54.577723 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 03:37:54.577734 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 03:37:54.577746 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 03:37:54.577756 | orchestrator |
2026-02-13 03:37:54.577767 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-13 03:37:54.577778 | orchestrator | Friday 13 February 2026 03:37:40 +0000 (0:00:00.592) 0:05:01.124 *******
2026-02-13 03:37:54.577788 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:37:54.577800 | orchestrator |
2026-02-13 03:37:54.577811 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-13 03:37:54.577821 | orchestrator | Friday 13 February 2026 03:37:40 +0000 (0:00:00.620) 0:05:01.744 *******
2026-02-13 03:37:54.577832 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:37:54.577843 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:37:54.577853 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:37:54.577864 | orchestrator |
2026-02-13 03:37:54.577875 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-13 03:37:54.577895 | orchestrator | Friday 13 February 2026 03:37:41 +0000 (0:00:00.661) 0:05:02.406 *******
2026-02-13 03:37:54.577906 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:37:54.577917 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:37:54.577927 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:37:54.577938 | orchestrator |
2026-02-13 03:37:54.577949 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-13 03:37:54.577960 | orchestrator | Friday 13 February 2026 03:37:41 +0000 (0:00:00.313) 0:05:02.720 *******
2026-02-13 03:37:54.577970 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-13 03:37:54.577982 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-13 03:37:54.577993 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-13 03:37:54.578003 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-13 03:37:54.578014 | orchestrator |
2026-02-13 03:37:54.578133 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-13 03:37:54.578145 | orchestrator | Friday 13 February 2026 03:37:51 +0000 (0:00:10.288) 0:05:13.008 *******
2026-02-13 03:37:54.578156 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:37:54.578167 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:37:54.578178 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:37:54.578188 | orchestrator |
2026-02-13 03:37:54.578199 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-13 03:37:54.578210 | orchestrator | Friday 13 February 2026 03:37:52 +0000 (0:00:00.312) 0:05:13.321 *******
2026-02-13 03:37:54.578221 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-13 03:37:54.578232 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-13 03:37:54.578242 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-13 03:37:54.578253 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-13 03:37:54.578264 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-13 03:37:54.578275 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-13 03:37:54.578285 | orchestrator |
2026-02-13 03:37:54.578296 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-13 03:37:54.578317 | orchestrator | Friday 13 February 2026 03:37:54 +0000 (0:00:02.258) 0:05:15.579 *******
2026-02-13 03:38:49.136742 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-13 03:38:49.136826 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-13 03:38:49.136839 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-13 03:38:49.136849 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-13 03:38:49.136858 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-13 03:38:49.136867 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-13 03:38:49.136876 | orchestrator |
2026-02-13 03:38:49.136886 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-13 03:38:49.136896 | orchestrator | Friday 13 February 2026 03:37:55 +0000 (0:00:01.186) 0:05:16.766 *******
2026-02-13 03:38:49.136905 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:38:49.136914 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:38:49.136922 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:38:49.136931 | orchestrator |
2026-02-13 03:38:49.136940 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-13 03:38:49.136948 | orchestrator | Friday 13 February 2026 03:37:56 +0000 (0:00:00.678) 0:05:17.445 *******
2026-02-13 03:38:49.136957 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:38:49.136967 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:38:49.136975 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:38:49.136984 | orchestrator |
2026-02-13 03:38:49.136993 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-13 03:38:49.137002 | orchestrator | Friday 13 February 2026 03:37:56 +0000 (0:00:00.270) 0:05:17.715 *******
2026-02-13 03:38:49.137029 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:38:49.137039 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:38:49.137048 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:38:49.137057 | orchestrator |
2026-02-13 03:38:49.137065 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-13 03:38:49.137074 | orchestrator | Friday 13 February 2026 03:37:57 +0000 (0:00:00.394) 0:05:18.110 *******
2026-02-13 03:38:49.137083 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:38:49.137092 | orchestrator |
2026-02-13 03:38:49.137101 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-13 03:38:49.137110 | orchestrator | Friday 13 February 2026 03:37:57 +0000 (0:00:00.523) 0:05:18.633 *******
2026-02-13 03:38:49.137119 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:38:49.137127 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:38:49.137136 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:38:49.137145 | orchestrator |
2026-02-13 03:38:49.137153 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-13 03:38:49.137162 | orchestrator | Friday 13 February 2026 03:37:57 +0000 (0:00:00.304) 0:05:18.938 *******
2026-02-13 03:38:49.137171 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:38:49.137179 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:38:49.137188 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:38:49.137197 | orchestrator |
2026-02-13 03:38:49.137205 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-13 03:38:49.137214 | orchestrator | Friday 13 February 2026 03:37:58 +0000 (0:00:00.471) 0:05:19.410 *******
2026-02-13 03:38:49.137223 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:38:49.137232 | orchestrator |
2026-02-13 03:38:49.137241 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-13 03:38:49.137250 | orchestrator | Friday 13 February 2026 03:37:58 +0000 (0:00:00.497) 0:05:19.908 *******
2026-02-13 03:38:49.137259 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:38:49.137267 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:38:49.137276 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:38:49.137285 | orchestrator |
2026-02-13 03:38:49.137294 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-13 03:38:49.137302 | orchestrator | Friday 13 February 2026 03:38:00 +0000 (0:00:01.245) 0:05:21.153 *******
2026-02-13 03:38:49.137311 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:38:49.137321 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:38:49.137337 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:38:49.137353 | orchestrator |
2026-02-13 03:38:49.137369 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-13 03:38:49.137385 | orchestrator | Friday 13 February 2026 03:38:01 +0000 (0:00:01.315) 0:05:22.469 *******
2026-02-13 03:38:49.137401 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:38:49.137418 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:38:49.137434 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:38:49.137446 | orchestrator |
2026-02-13 03:38:49.137456 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-13 03:38:49.137478 | orchestrator | Friday 13 February 2026 03:38:03 +0000 (0:00:01.928) 0:05:24.397 *******
2026-02-13 03:38:49.137489 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:38:49.137499 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:38:49.137535 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:38:49.137545 | orchestrator |
2026-02-13 03:38:49.137556 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-13 03:38:49.137566 | orchestrator | Friday 13 February 2026 03:38:05 +0000 (0:00:01.987) 0:05:26.385 *******
2026-02-13 03:38:49.137576 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:38:49.137585 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:38:49.137595 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-13 03:38:49.137613 | orchestrator |
2026-02-13 03:38:49.137622 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-13 03:38:49.137632 | orchestrator | Friday 13 February 2026 03:38:05 +0000 (0:00:00.528) 0:05:26.913 *******
2026-02-13 03:38:49.137642 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-13 03:38:49.137652 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-13 03:38:49.137675 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-13 03:38:49.137686 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-13 03:38:49.137696 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-13 03:38:49.137706 | orchestrator |
2026-02-13 03:38:49.137715 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-13 03:38:49.137724 | orchestrator | Friday 13 February 2026 03:38:30 +0000 (0:00:24.145) 0:05:51.059 *******
2026-02-13 03:38:49.137733 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-13 03:38:49.137742 | orchestrator |
2026-02-13 03:38:49.137750 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-13 03:38:49.137759 | orchestrator | Friday 13 February 2026 03:38:31 +0000 (0:00:01.333) 0:05:52.393 *******
2026-02-13 03:38:49.137767 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:38:49.137776 | orchestrator |
2026-02-13 03:38:49.137785 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-13 03:38:49.137794 | orchestrator | Friday 13 February 2026 03:38:31 +0000 (0:00:00.332) 0:05:52.725 *******
2026-02-13 03:38:49.137802 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:38:49.137811 | orchestrator |
2026-02-13 03:38:49.137819 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-13 03:38:49.137828 | orchestrator | Friday 13 February 2026 03:38:31 +0000 (0:00:00.160) 0:05:52.885 *******
2026-02-13 03:38:49.137837 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-13 03:38:49.137845 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-13 03:38:49.137854 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-13 03:38:49.137862 | orchestrator |
2026-02-13 03:38:49.137871 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-13 03:38:49.137880 | orchestrator | Friday 13 February 2026 03:38:38 +0000 (0:00:06.290) 0:05:59.176 *******
2026-02-13 03:38:49.137889 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-13 03:38:49.137897 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-13 03:38:49.137906 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-13 03:38:49.137915 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-13 03:38:49.137923 | orchestrator |
2026-02-13 03:38:49.137932 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-13 03:38:49.137941 | orchestrator | Friday 13 February 2026 03:38:43 +0000 (0:00:05.092) 0:06:04.269 *******
2026-02-13 03:38:49.137949 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:38:49.137958 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:38:49.137967 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:38:49.137976 | orchestrator |
2026-02-13 03:38:49.137985 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-13 03:38:49.137993 | orchestrator | Friday 13 February 2026 03:38:43 +0000 (0:00:00.680) 0:06:04.950 *******
2026-02-13 03:38:49.138002 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:38:49.138011 | orchestrator |
2026-02-13 03:38:49.138069 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-13 03:38:49.138079 | orchestrator | Friday 13 February 2026 03:38:44 +0000 (0:00:00.596) 0:06:05.546 *******
2026-02-13 03:38:49.138088 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:38:49.138096 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:38:49.138105 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:38:49.138113 | orchestrator |
2026-02-13 03:38:49.138122 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-13 03:38:49.138131 | orchestrator | Friday 13 February 2026 03:38:45 +0000 (0:00:00.642) 0:06:06.188 *******
2026-02-13 03:38:49.138139 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:38:49.138148 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:38:49.138156 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:38:49.138165 | orchestrator |
2026-02-13 03:38:49.138173 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-13 03:38:49.138182 | orchestrator | Friday 13 February 2026 03:38:46 +0000 (0:00:01.229) 0:06:07.418 *******
2026-02-13 03:38:49.138191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 03:38:49.138199 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 03:38:49.138212 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 03:38:49.138221 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:38:49.138230 | orchestrator |
2026-02-13 03:38:49.138239 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-13 03:38:49.138248 | orchestrator | Friday 13 February 2026 03:38:47 +0000 (0:00:00.621) 0:06:08.039 *******
2026-02-13 03:38:49.138256 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:38:49.138265 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:38:49.138273 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:38:49.138282 | orchestrator |
2026-02-13 03:38:49.138290 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-13 03:38:49.138299 | orchestrator |
2026-02-13 03:38:49.138308 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-13 03:38:49.138316 | orchestrator | Friday 13 February 2026 03:38:47 +0000 (0:00:00.547) 0:06:08.587 *******
2026-02-13 03:38:49.138325 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:38:49.138335 | orchestrator |
2026-02-13 03:38:49.138344 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-13 03:38:49.138352 | orchestrator | Friday 13 February 2026 03:38:48 +0000 (0:00:00.822) 0:06:09.410 *******
2026-02-13 03:38:49.138368 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:39:04.625646 | orchestrator |
2026-02-13 03:39:04.625760 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-13 03:39:04.625785 | orchestrator | Friday 13 February 2026 03:38:49 +0000 (0:00:00.733) 0:06:10.143 *******
2026-02-13 03:39:04.625805 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:39:04.625824 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:39:04.625843 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:39:04.625860 | orchestrator |
2026-02-13 03:39:04.625879 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-13 03:39:04.625897 | orchestrator | Friday 13 February 2026 03:38:49 +0000 (0:00:00.321) 0:06:10.464 *******
2026-02-13 03:39:04.625917 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:39:04.625935 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:39:04.625955 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:39:04.625972 | orchestrator |
2026-02-13 03:39:04.625990 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-13 03:39:04.626002 | orchestrator | Friday 13 February 2026 03:38:50 +0000 (0:00:00.695) 0:06:11.159 *******
2026-02-13 03:39:04.626013 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:39:04.626089 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:39:04.626127 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:39:04.626139 | orchestrator |
2026-02-13 03:39:04.626152 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-13 03:39:04.626165 | orchestrator | Friday 13 February 2026 03:38:50 +0000 (0:00:00.702) 0:06:11.862 *******
2026-02-13 03:39:04.626178 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:39:04.626191 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:39:04.626204 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:39:04.626217 | orchestrator |
2026-02-13 03:39:04.626229 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-13 03:39:04.626242 | orchestrator | Friday 13 February 2026 03:38:51 +0000 (0:00:00.951) 0:06:12.813 *******
2026-02-13 03:39:04.626256 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:39:04.626269 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:39:04.626282 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:39:04.626295 | orchestrator |
2026-02-13 03:39:04.626307 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-13 03:39:04.626319 | orchestrator | Friday 13 February 2026 03:38:52 +0000 (0:00:00.331) 0:06:13.145 *******
2026-02-13 03:39:04.626332 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:39:04.626345 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:39:04.626357 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:39:04.626370 | orchestrator |
2026-02-13 03:39:04.626382 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-13 03:39:04.626395 | orchestrator | Friday 13 February 2026 03:38:52 +0000 (0:00:00.335) 0:06:13.481 *******
2026-02-13 03:39:04.626408 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:39:04.626421 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:39:04.626433 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:39:04.626446 | orchestrator |
2026-02-13 03:39:04.626459 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-13 03:39:04.626472 | orchestrator | Friday 13 February 2026 03:38:52 +0000 (0:00:00.320) 0:06:13.801 *******
2026-02-13 03:39:04.626485 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:39:04.626526 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:39:04.626539 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:39:04.626551 | orchestrator |
2026-02-13 03:39:04.626563 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-13 03:39:04.626574 | orchestrator | Friday 13 February 2026 03:38:53 +0000 (0:00:00.927) 0:06:14.728 *******
2026-02-13 03:39:04.626585 | orchestrator | ok: [testbed-node-3]
2026-02-13
03:39:04.626596 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:39:04.626606 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:39:04.626617 | orchestrator | 2026-02-13 03:39:04.626628 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-13 03:39:04.626639 | orchestrator | Friday 13 February 2026 03:38:54 +0000 (0:00:00.705) 0:06:15.434 ******* 2026-02-13 03:39:04.626649 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:39:04.626660 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:39:04.626671 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:39:04.626682 | orchestrator | 2026-02-13 03:39:04.626693 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-13 03:39:04.626704 | orchestrator | Friday 13 February 2026 03:38:54 +0000 (0:00:00.312) 0:06:15.746 ******* 2026-02-13 03:39:04.626714 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:39:04.626725 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:39:04.626736 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:39:04.626747 | orchestrator | 2026-02-13 03:39:04.626758 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-13 03:39:04.626782 | orchestrator | Friday 13 February 2026 03:38:55 +0000 (0:00:00.332) 0:06:16.078 ******* 2026-02-13 03:39:04.626793 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:39:04.626804 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:39:04.626815 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:39:04.626826 | orchestrator | 2026-02-13 03:39:04.626845 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-13 03:39:04.626856 | orchestrator | Friday 13 February 2026 03:38:55 +0000 (0:00:00.604) 0:06:16.683 ******* 2026-02-13 03:39:04.626866 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:39:04.626877 | orchestrator | ok: 
[testbed-node-4] 2026-02-13 03:39:04.626889 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:39:04.626899 | orchestrator | 2026-02-13 03:39:04.626910 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-13 03:39:04.626921 | orchestrator | Friday 13 February 2026 03:38:56 +0000 (0:00:00.341) 0:06:17.025 ******* 2026-02-13 03:39:04.626932 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:39:04.626949 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:39:04.626967 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:39:04.626985 | orchestrator | 2026-02-13 03:39:04.627002 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-13 03:39:04.627019 | orchestrator | Friday 13 February 2026 03:38:56 +0000 (0:00:00.359) 0:06:17.385 ******* 2026-02-13 03:39:04.627036 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:39:04.627054 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:39:04.627071 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:39:04.627090 | orchestrator | 2026-02-13 03:39:04.627107 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-13 03:39:04.627152 | orchestrator | Friday 13 February 2026 03:38:56 +0000 (0:00:00.321) 0:06:17.706 ******* 2026-02-13 03:39:04.627170 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:39:04.627185 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:39:04.627196 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:39:04.627207 | orchestrator | 2026-02-13 03:39:04.627218 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-13 03:39:04.627229 | orchestrator | Friday 13 February 2026 03:38:57 +0000 (0:00:00.561) 0:06:18.268 ******* 2026-02-13 03:39:04.627240 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:39:04.627250 | orchestrator | skipping: [testbed-node-4] 2026-02-13 
03:39:04.627261 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:39:04.627272 | orchestrator | 2026-02-13 03:39:04.627282 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-13 03:39:04.627293 | orchestrator | Friday 13 February 2026 03:38:57 +0000 (0:00:00.333) 0:06:18.601 ******* 2026-02-13 03:39:04.627304 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:39:04.627315 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:39:04.627325 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:39:04.627336 | orchestrator | 2026-02-13 03:39:04.627347 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-13 03:39:04.627358 | orchestrator | Friday 13 February 2026 03:38:57 +0000 (0:00:00.349) 0:06:18.951 ******* 2026-02-13 03:39:04.627368 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:39:04.627379 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:39:04.627390 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:39:04.627400 | orchestrator | 2026-02-13 03:39:04.627411 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-13 03:39:04.627422 | orchestrator | Friday 13 February 2026 03:38:58 +0000 (0:00:00.810) 0:06:19.761 ******* 2026-02-13 03:39:04.627432 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:39:04.627443 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:39:04.627453 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:39:04.627464 | orchestrator | 2026-02-13 03:39:04.627475 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-13 03:39:04.627486 | orchestrator | Friday 13 February 2026 03:38:59 +0000 (0:00:00.345) 0:06:20.107 ******* 2026-02-13 03:39:04.627540 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-13 03:39:04.627560 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 03:39:04.627574 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 03:39:04.627596 | orchestrator | 2026-02-13 03:39:04.627607 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-13 03:39:04.627618 | orchestrator | Friday 13 February 2026 03:38:59 +0000 (0:00:00.657) 0:06:20.764 ******* 2026-02-13 03:39:04.627629 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:39:04.627640 | orchestrator | 2026-02-13 03:39:04.627652 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-13 03:39:04.627662 | orchestrator | Friday 13 February 2026 03:39:00 +0000 (0:00:00.554) 0:06:21.319 ******* 2026-02-13 03:39:04.627673 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:39:04.627684 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:39:04.627695 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:39:04.627706 | orchestrator | 2026-02-13 03:39:04.627717 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-13 03:39:04.627728 | orchestrator | Friday 13 February 2026 03:39:00 +0000 (0:00:00.573) 0:06:21.892 ******* 2026-02-13 03:39:04.627739 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:39:04.627750 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:39:04.627761 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:39:04.627771 | orchestrator | 2026-02-13 03:39:04.627783 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-13 03:39:04.627793 | orchestrator | Friday 13 February 2026 03:39:01 +0000 (0:00:00.315) 0:06:22.207 ******* 2026-02-13 03:39:04.627804 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:39:04.627815 | 
orchestrator | ok: [testbed-node-4] 2026-02-13 03:39:04.627826 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:39:04.627837 | orchestrator | 2026-02-13 03:39:04.627848 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-13 03:39:04.627859 | orchestrator | Friday 13 February 2026 03:39:01 +0000 (0:00:00.632) 0:06:22.839 ******* 2026-02-13 03:39:04.627870 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:39:04.627880 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:39:04.627891 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:39:04.627902 | orchestrator | 2026-02-13 03:39:04.627920 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-13 03:39:04.627932 | orchestrator | Friday 13 February 2026 03:39:02 +0000 (0:00:00.580) 0:06:23.420 ******* 2026-02-13 03:39:04.627943 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-13 03:39:04.627955 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-13 03:39:04.627966 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-13 03:39:04.627976 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-13 03:39:04.627988 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-13 03:39:04.627999 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-13 03:39:04.628010 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-13 03:39:04.628020 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-13 03:39:04.628031 | orchestrator | changed: [testbed-node-4] => 
(item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-13 03:39:04.628050 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-13 03:40:12.152097 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-13 03:40:12.152184 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-13 03:40:12.152192 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-13 03:40:12.152199 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-13 03:40:12.152220 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-13 03:40:12.152226 | orchestrator | 2026-02-13 03:40:12.152233 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-13 03:40:12.152238 | orchestrator | Friday 13 February 2026 03:39:04 +0000 (0:00:02.204) 0:06:25.624 ******* 2026-02-13 03:40:12.152244 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:40:12.152250 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:40:12.152255 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:40:12.152261 | orchestrator | 2026-02-13 03:40:12.152266 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-13 03:40:12.152271 | orchestrator | Friday 13 February 2026 03:39:04 +0000 (0:00:00.304) 0:06:25.929 ******* 2026-02-13 03:40:12.152277 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:40:12.152283 | orchestrator | 2026-02-13 03:40:12.152288 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-13 03:40:12.152293 | orchestrator | Friday 13 February 2026 03:39:05 +0000 (0:00:00.815) 
0:06:26.744 ******* 2026-02-13 03:40:12.152298 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-13 03:40:12.152304 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-13 03:40:12.152309 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-13 03:40:12.152315 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-13 03:40:12.152320 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-13 03:40:12.152325 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-13 03:40:12.152330 | orchestrator | 2026-02-13 03:40:12.152336 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-13 03:40:12.152341 | orchestrator | Friday 13 February 2026 03:39:06 +0000 (0:00:01.036) 0:06:27.781 ******* 2026-02-13 03:40:12.152346 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:40:12.152351 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-13 03:40:12.152356 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-13 03:40:12.152361 | orchestrator | 2026-02-13 03:40:12.152366 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-13 03:40:12.152371 | orchestrator | Friday 13 February 2026 03:39:08 +0000 (0:00:01.954) 0:06:29.736 ******* 2026-02-13 03:40:12.152377 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-13 03:40:12.152382 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-13 03:40:12.152387 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:40:12.152392 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-13 03:40:12.152397 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-13 03:40:12.152402 | orchestrator | changed: [testbed-node-4] 2026-02-13 
03:40:12.152407 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-13 03:40:12.152412 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-13 03:40:12.152417 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:40:12.152422 | orchestrator | 2026-02-13 03:40:12.152427 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-13 03:40:12.152433 | orchestrator | Friday 13 February 2026 03:39:09 +0000 (0:00:01.114) 0:06:30.851 ******* 2026-02-13 03:40:12.152438 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-13 03:40:12.152443 | orchestrator | 2026-02-13 03:40:12.152448 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-13 03:40:12.152453 | orchestrator | Friday 13 February 2026 03:39:11 +0000 (0:00:02.046) 0:06:32.897 ******* 2026-02-13 03:40:12.152493 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:40:12.152500 | orchestrator | 2026-02-13 03:40:12.152510 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-13 03:40:12.152515 | orchestrator | Friday 13 February 2026 03:39:12 +0000 (0:00:00.820) 0:06:33.718 ******* 2026-02-13 03:40:12.152521 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}) 2026-02-13 03:40:12.152528 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}) 2026-02-13 03:40:12.152533 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}) 2026-02-13 03:40:12.152538 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}) 2026-02-13 03:40:12.152543 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}) 2026-02-13 03:40:12.152558 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}) 2026-02-13 03:40:12.152563 | orchestrator | 2026-02-13 03:40:12.152568 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-13 03:40:12.152573 | orchestrator | Friday 13 February 2026 03:39:54 +0000 (0:00:42.059) 0:07:15.777 ******* 2026-02-13 03:40:12.152579 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:40:12.152584 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:40:12.152589 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:40:12.152594 | orchestrator | 2026-02-13 03:40:12.152599 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-13 03:40:12.152604 | orchestrator | Friday 13 February 2026 03:39:55 +0000 (0:00:00.317) 0:07:16.095 ******* 2026-02-13 03:40:12.152609 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:40:12.152614 | orchestrator | 2026-02-13 03:40:12.152619 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-13 03:40:12.152624 | orchestrator | Friday 13 February 2026 03:39:55 +0000 (0:00:00.817) 0:07:16.913 ******* 2026-02-13 03:40:12.152629 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:40:12.152634 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:40:12.152639 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:40:12.152644 | orchestrator | 2026-02-13 
03:40:12.152649 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-13 03:40:12.152654 | orchestrator | Friday 13 February 2026 03:39:56 +0000 (0:00:00.660) 0:07:17.574 ******* 2026-02-13 03:40:12.152660 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:40:12.152665 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:40:12.152670 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:40:12.152675 | orchestrator | 2026-02-13 03:40:12.152680 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-13 03:40:12.152685 | orchestrator | Friday 13 February 2026 03:39:59 +0000 (0:00:02.512) 0:07:20.086 ******* 2026-02-13 03:40:12.152690 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:40:12.152695 | orchestrator | 2026-02-13 03:40:12.152700 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-13 03:40:12.152705 | orchestrator | Friday 13 February 2026 03:39:59 +0000 (0:00:00.795) 0:07:20.881 ******* 2026-02-13 03:40:12.152710 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:40:12.152715 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:40:12.152720 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:40:12.152725 | orchestrator | 2026-02-13 03:40:12.152731 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-13 03:40:12.152736 | orchestrator | Friday 13 February 2026 03:40:01 +0000 (0:00:01.211) 0:07:22.092 ******* 2026-02-13 03:40:12.152745 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:40:12.152750 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:40:12.152755 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:40:12.152760 | orchestrator | 2026-02-13 03:40:12.152765 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] 
*************************************** 2026-02-13 03:40:12.152770 | orchestrator | Friday 13 February 2026 03:40:02 +0000 (0:00:01.282) 0:07:23.375 ******* 2026-02-13 03:40:12.152775 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:40:12.152780 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:40:12.152785 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:40:12.152790 | orchestrator | 2026-02-13 03:40:12.152795 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-13 03:40:12.152800 | orchestrator | Friday 13 February 2026 03:40:04 +0000 (0:00:01.911) 0:07:25.287 ******* 2026-02-13 03:40:12.152805 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:40:12.152810 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:40:12.152815 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:40:12.152820 | orchestrator | 2026-02-13 03:40:12.152825 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-13 03:40:12.152830 | orchestrator | Friday 13 February 2026 03:40:04 +0000 (0:00:00.350) 0:07:25.657 ******* 2026-02-13 03:40:12.152835 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:40:12.152840 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:40:12.152845 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:40:12.152850 | orchestrator | 2026-02-13 03:40:12.152855 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/<cluster>-<osd-id> is present] ********* 2026-02-13 03:40:12.152860 | orchestrator | Friday 13 February 2026 03:40:04 +0000 (0:00:00.350) 0:07:26.007 ******* 2026-02-13 03:40:12.152866 | orchestrator | ok: [testbed-node-3] => (item=5) 2026-02-13 03:40:12.152875 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-13 03:40:12.152881 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-02-13 03:40:12.152886 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-13 03:40:12.152891 | orchestrator | ok:
[testbed-node-4] => (item=3) 2026-02-13 03:40:12.152896 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-13 03:40:12.152901 | orchestrator | 2026-02-13 03:40:12.152906 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-13 03:40:12.152911 | orchestrator | Friday 13 February 2026 03:40:06 +0000 (0:00:01.023) 0:07:27.031 ******* 2026-02-13 03:40:12.152916 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-13 03:40:12.152921 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-13 03:40:12.152926 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-13 03:40:12.152931 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-13 03:40:12.152936 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-13 03:40:12.152941 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-13 03:40:12.152946 | orchestrator | 2026-02-13 03:40:12.152951 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-13 03:40:12.152956 | orchestrator | Friday 13 February 2026 03:40:08 +0000 (0:00:02.552) 0:07:29.583 ******* 2026-02-13 03:40:12.152961 | orchestrator | changed: [testbed-node-3] => (item=5) 2026-02-13 03:40:12.152966 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-13 03:40:12.152971 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-13 03:40:12.152976 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-13 03:40:12.152985 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-13 03:40:43.274998 | orchestrator | changed: [testbed-node-4] => (item=3) 2026-02-13 03:40:43.275128 | orchestrator | 2026-02-13 03:40:43.275145 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-13 03:40:43.275157 | orchestrator | Friday 13 February 2026 03:40:12 +0000 (0:00:03.572) 0:07:33.155 ******* 2026-02-13 03:40:43.275167 | orchestrator | 
skipping: [testbed-node-3] 2026-02-13 03:40:43.275178 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:40:43.275221 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-13 03:40:43.275232 | orchestrator | 2026-02-13 03:40:43.275242 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-13 03:40:43.275252 | orchestrator | Friday 13 February 2026 03:40:14 +0000 (0:00:02.461) 0:07:35.617 ******* 2026-02-13 03:40:43.275262 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:40:43.275271 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:40:43.275281 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2026-02-13 03:40:43.275292 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-13 03:40:43.275301 | orchestrator | 2026-02-13 03:40:43.275311 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-13 03:40:43.275321 | orchestrator | Friday 13 February 2026 03:40:27 +0000 (0:00:12.429) 0:07:48.046 ******* 2026-02-13 03:40:43.275330 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:40:43.275340 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:40:43.275349 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:40:43.275359 | orchestrator | 2026-02-13 03:40:43.275370 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-13 03:40:43.275379 | orchestrator | Friday 13 February 2026 03:40:28 +0000 (0:00:01.175) 0:07:49.221 ******* 2026-02-13 03:40:43.275389 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:40:43.275398 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:40:43.275408 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:40:43.275417 | orchestrator | 2026-02-13 03:40:43.275427 | orchestrator | RUNNING HANDLER [ceph-handler : Osds 
handler] ********************************** 2026-02-13 03:40:43.275436 | orchestrator | Friday 13 February 2026 03:40:28 +0000 (0:00:00.333) 0:07:49.555 ******* 2026-02-13 03:40:43.275446 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:40:43.275456 | orchestrator | 2026-02-13 03:40:43.275493 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-13 03:40:43.275512 | orchestrator | Friday 13 February 2026 03:40:29 +0000 (0:00:00.791) 0:07:50.346 ******* 2026-02-13 03:40:43.275523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 03:40:43.275533 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 03:40:43.275545 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 03:40:43.275556 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:40:43.275567 | orchestrator | 2026-02-13 03:40:43.275579 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-02-13 03:40:43.275590 | orchestrator | Friday 13 February 2026 03:40:29 +0000 (0:00:00.414) 0:07:50.761 ******* 2026-02-13 03:40:43.275600 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:40:43.275612 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:40:43.275623 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:40:43.275634 | orchestrator | 2026-02-13 03:40:43.275645 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-02-13 03:40:43.275656 | orchestrator | Friday 13 February 2026 03:40:30 +0000 (0:00:00.330) 0:07:51.092 ******* 2026-02-13 03:40:43.275667 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:40:43.275678 | orchestrator | 2026-02-13 03:40:43.275689 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 
2026-02-13 03:40:43.275700 | orchestrator | Friday 13 February 2026  03:40:30 +0000 (0:00:00.228)       0:07:51.321 *******
2026-02-13 03:40:43.275711 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.275722 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:40:43.275733 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:40:43.275743 | orchestrator |
2026-02-13 03:40:43.275754 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-13 03:40:43.275765 | orchestrator | Friday 13 February 2026  03:40:30 +0000 (0:00:00.576)       0:07:51.897 *******
2026-02-13 03:40:43.275784 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.275795 | orchestrator |
2026-02-13 03:40:43.275823 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-13 03:40:43.275836 | orchestrator | Friday 13 February 2026  03:40:31 +0000 (0:00:00.250)       0:07:52.148 *******
2026-02-13 03:40:43.275847 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.275858 | orchestrator |
2026-02-13 03:40:43.275868 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-13 03:40:43.275877 | orchestrator | Friday 13 February 2026  03:40:31 +0000 (0:00:00.266)       0:07:52.414 *******
2026-02-13 03:40:43.275887 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.275896 | orchestrator |
2026-02-13 03:40:43.275906 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-13 03:40:43.275915 | orchestrator | Friday 13 February 2026  03:40:31 +0000 (0:00:00.126)       0:07:52.541 *******
2026-02-13 03:40:43.275925 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.275934 | orchestrator |
2026-02-13 03:40:43.275944 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-13 03:40:43.275953 | orchestrator | Friday 13 February 2026  03:40:31 +0000 (0:00:00.244)       0:07:52.786 *******
2026-02-13 03:40:43.275963 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.275973 | orchestrator |
2026-02-13 03:40:43.275982 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-13 03:40:43.275992 | orchestrator | Friday 13 February 2026  03:40:31 +0000 (0:00:00.231)       0:07:53.017 *******
2026-02-13 03:40:43.276002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 03:40:43.276013 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 03:40:43.276052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 03:40:43.276078 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.276093 | orchestrator |
2026-02-13 03:40:43.276108 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-13 03:40:43.276123 | orchestrator | Friday 13 February 2026  03:40:32 +0000 (0:00:00.466)       0:07:53.484 *******
2026-02-13 03:40:43.276137 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.276152 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:40:43.276166 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:40:43.276181 | orchestrator |
2026-02-13 03:40:43.276196 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-13 03:40:43.276212 | orchestrator | Friday 13 February 2026  03:40:32 +0000 (0:00:00.326)       0:07:53.810 *******
2026-02-13 03:40:43.276228 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.276242 | orchestrator |
2026-02-13 03:40:43.276256 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-13 03:40:43.276272 | orchestrator | Friday 13 February 2026  03:40:33 +0000 (0:00:00.221)       0:07:54.031 *******
2026-02-13 03:40:43.276286 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.276300 | orchestrator |
2026-02-13 03:40:43.276316 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-13 03:40:43.276331 | orchestrator |
2026-02-13 03:40:43.276347 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-13 03:40:43.276363 | orchestrator | Friday 13 February 2026  03:40:34 +0000 (0:00:01.256)       0:07:55.288 *******
2026-02-13 03:40:43.276379 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:40:43.276397 | orchestrator |
2026-02-13 03:40:43.276413 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-13 03:40:43.276428 | orchestrator | Friday 13 February 2026  03:40:35 +0000 (0:00:01.208)       0:07:56.496 *******
2026-02-13 03:40:43.276443 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:40:43.276506 | orchestrator |
2026-02-13 03:40:43.276524 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-13 03:40:43.276538 | orchestrator | Friday 13 February 2026  03:40:36 +0000 (0:00:01.279)       0:07:57.776 *******
2026-02-13 03:40:43.276554 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.276570 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:40:43.276585 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:40:43.276600 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:40:43.276616 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:40:43.276630 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:40:43.276645 | orchestrator |
2026-02-13 03:40:43.276660 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-13 03:40:43.276674 | orchestrator | Friday 13 February 2026  03:40:38 +0000 (0:00:01.327)       0:07:59.104 *******
2026-02-13 03:40:43.276689 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:40:43.276704 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:40:43.276719 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:40:43.276734 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:40:43.276748 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:40:43.276763 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:40:43.276778 | orchestrator |
2026-02-13 03:40:43.276793 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-13 03:40:43.276901 | orchestrator | Friday 13 February 2026  03:40:38 +0000 (0:00:00.742)       0:07:59.846 *******
2026-02-13 03:40:43.276918 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:40:43.276933 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:40:43.276947 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:40:43.276962 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:40:43.276977 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:40:43.276991 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:40:43.277006 | orchestrator |
2026-02-13 03:40:43.277021 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-13 03:40:43.277035 | orchestrator | Friday 13 February 2026  03:40:39 +0000 (0:00:00.854)       0:08:00.700 *******
2026-02-13 03:40:43.277050 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:40:43.277065 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:40:43.277080 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:40:43.277094 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:40:43.277109 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:40:43.277124 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:40:43.277138 | orchestrator |
2026-02-13 03:40:43.277162 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-13 03:40:43.277177 | orchestrator | Friday 13 February 2026  03:40:40 +0000 (0:00:00.729)       0:08:01.430 *******
2026-02-13 03:40:43.277192 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.277206 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:40:43.277221 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:40:43.277236 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:40:43.277251 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:40:43.277266 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:40:43.277280 | orchestrator |
2026-02-13 03:40:43.277295 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-13 03:40:43.277310 | orchestrator | Friday 13 February 2026  03:40:41 +0000 (0:00:01.301)       0:08:02.731 *******
2026-02-13 03:40:43.277325 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.277341 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:40:43.277357 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:40:43.277372 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:40:43.277388 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:40:43.277403 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:40:43.277418 | orchestrator |
2026-02-13 03:40:43.277434 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-13 03:40:43.277449 | orchestrator | Friday 13 February 2026  03:40:42 +0000 (0:00:00.638)       0:08:03.370 *******
2026-02-13 03:40:43.277465 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:40:43.277560 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:40:43.277575 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:40:43.277590 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:40:43.277620 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:41:14.541006 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:41:14.541114 | orchestrator |
2026-02-13 03:41:14.541128 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-13 03:41:14.541137 | orchestrator | Friday 13 February 2026  03:40:43 +0000 (0:00:00.911)       0:08:04.281 *******
2026-02-13 03:41:14.541146 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:14.541154 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:14.541162 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:14.541170 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:41:14.541178 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:41:14.541186 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:41:14.541194 | orchestrator |
2026-02-13 03:41:14.541202 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-13 03:41:14.541210 | orchestrator | Friday 13 February 2026  03:40:44 +0000 (0:00:01.083)       0:08:05.365 *******
2026-02-13 03:41:14.541218 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:14.541226 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:14.541234 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:14.541242 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:41:14.541249 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:41:14.541257 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:41:14.541265 | orchestrator |
2026-02-13 03:41:14.541273 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-13 03:41:14.541281 | orchestrator | Friday 13 February 2026  03:40:45 +0000 (0:00:01.361)       0:08:06.727 *******
2026-02-13 03:41:14.541289 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:14.541297 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:14.541305 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:14.541313 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:41:14.541326 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:41:14.541340 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:41:14.541353 | orchestrator |
2026-02-13 03:41:14.541367 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-13 03:41:14.541380 | orchestrator | Friday 13 February 2026  03:40:46 +0000 (0:00:00.637)       0:08:07.364 *******
2026-02-13 03:41:14.541394 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:14.541407 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:14.541419 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:14.541432 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:41:14.541444 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:41:14.541455 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:41:14.541466 | orchestrator |
2026-02-13 03:41:14.541498 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-13 03:41:14.541510 | orchestrator | Friday 13 February 2026  03:40:47 +0000 (0:00:00.869)       0:08:08.234 *******
2026-02-13 03:41:14.541521 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:14.541533 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:14.541544 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:14.541556 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:41:14.541568 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:41:14.541582 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:41:14.541597 | orchestrator |
2026-02-13 03:41:14.541612 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-13 03:41:14.541628 | orchestrator | Friday 13 February 2026  03:40:47 +0000 (0:00:00.610)       0:08:08.845 *******
2026-02-13 03:41:14.541643 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:14.541657 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:14.541670 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:14.541685 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:41:14.541698 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:41:14.541741 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:41:14.541757 | orchestrator |
2026-02-13 03:41:14.541771 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-13 03:41:14.541785 | orchestrator | Friday 13 February 2026  03:40:48 +0000 (0:00:00.831)       0:08:09.676 *******
2026-02-13 03:41:14.541799 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:14.541813 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:14.541826 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:14.541840 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:41:14.541854 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:41:14.541868 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:41:14.541882 | orchestrator |
2026-02-13 03:41:14.541896 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-13 03:41:14.541909 | orchestrator | Friday 13 February 2026  03:40:49 +0000 (0:00:00.635)       0:08:10.312 *******
2026-02-13 03:41:14.541918 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:14.541926 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:14.541934 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:14.541942 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:41:14.541949 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:41:14.541957 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:41:14.541965 | orchestrator |
2026-02-13 03:41:14.541973 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-13 03:41:14.541982 | orchestrator | Friday 13 February 2026  03:40:50 +0000 (0:00:00.819)       0:08:11.132 *******
2026-02-13 03:41:14.541990 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:14.541997 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:14.542005 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:14.542071 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:41:14.542082 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:41:14.542089 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:41:14.542097 | orchestrator |
2026-02-13 03:41:14.542105 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-13 03:41:14.542113 | orchestrator | Friday 13 February 2026  03:40:50 +0000 (0:00:00.609)       0:08:11.741 *******
2026-02-13 03:41:14.542121 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:14.542129 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:14.542137 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:14.542145 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:41:14.542153 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:41:14.542190 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:41:14.542199 | orchestrator |
2026-02-13 03:41:14.542208 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-13 03:41:14.542216 | orchestrator | Friday 13 February 2026  03:40:51 +0000 (0:00:00.857)       0:08:12.598 *******
2026-02-13 03:41:14.542224 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:14.542231 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:14.542239 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:14.542247 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:41:14.542275 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:41:14.542283 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:41:14.542291 | orchestrator |
2026-02-13 03:41:14.542299 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-13 03:41:14.542307 | orchestrator | Friday 13 February 2026  03:40:52 +0000 (0:00:00.646)       0:08:13.245 *******
2026-02-13 03:41:14.542315 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:14.542322 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:14.542330 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:14.542380 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:41:14.542389 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:41:14.542397 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:41:14.542405 | orchestrator |
2026-02-13 03:41:14.542413 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-13 03:41:14.542421 | orchestrator | Friday 13 February 2026  03:40:53 +0000 (0:00:01.352)       0:08:14.597 *******
2026-02-13 03:41:14.542439 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-13 03:41:14.542447 | orchestrator |
2026-02-13 03:41:14.542454 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-13 03:41:14.542462 | orchestrator | Friday 13 February 2026  03:40:57 +0000 (0:00:03.830)       0:08:18.428 *******
2026-02-13 03:41:14.542629 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-13 03:41:14.542640 | orchestrator |
2026-02-13 03:41:14.542648 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-13 03:41:14.542656 | orchestrator | Friday 13 February 2026  03:40:59 +0000 (0:00:02.466)       0:08:20.894 *******
2026-02-13 03:41:14.542664 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:41:14.542672 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:41:14.542680 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:41:14.542688 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:41:14.542695 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:41:14.542703 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:41:14.542711 | orchestrator |
2026-02-13 03:41:14.542719 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-13 03:41:14.542726 | orchestrator | Friday 13 February 2026  03:41:01 +0000 (0:00:01.506)       0:08:22.401 *******
2026-02-13 03:41:14.542734 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:41:14.542742 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:41:14.542750 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:41:14.542757 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:41:14.542765 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:41:14.542772 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:41:14.542780 | orchestrator |
2026-02-13 03:41:14.542788 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-13 03:41:14.542796 | orchestrator | Friday 13 February 2026  03:41:02 +0000 (0:00:01.185)       0:08:23.586 *******
2026-02-13 03:41:14.542805 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:41:14.542814 | orchestrator |
2026-02-13 03:41:14.542822 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-13 03:41:14.542830 | orchestrator | Friday 13 February 2026  03:41:03 +0000 (0:00:01.225)       0:08:24.812 *******
2026-02-13 03:41:14.542842 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:41:14.542855 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:41:14.542868 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:41:14.542881 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:41:14.542893 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:41:14.542906 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:41:14.542918 | orchestrator |
2026-02-13 03:41:14.542931 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-13 03:41:14.542944 | orchestrator | Friday 13 February 2026  03:41:05 +0000 (0:00:01.551)       0:08:26.364 *******
2026-02-13 03:41:14.542956 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:41:14.542970 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:41:14.542982 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:41:14.542996 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:41:14.543010 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:41:14.543022 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:41:14.543035 | orchestrator |
2026-02-13 03:41:14.543044 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-13 03:41:14.543052 | orchestrator | Friday 13 February 2026  03:41:09 +0000 (0:00:03.758)       0:08:30.122 *******
2026-02-13 03:41:14.543067 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:41:14.543076 | orchestrator |
2026-02-13 03:41:14.543084 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-13 03:41:14.543100 | orchestrator | Friday 13 February 2026  03:41:10 +0000 (0:00:01.388)       0:08:31.510 *******
2026-02-13 03:41:14.543108 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:14.543116 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:14.543124 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:14.543132 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:41:14.543139 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:41:14.543147 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:41:14.543155 | orchestrator |
2026-02-13 03:41:14.543163 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-13 03:41:14.543170 | orchestrator | Friday 13 February 2026  03:41:11 +0000 (0:00:00.677)       0:08:32.188 *******
2026-02-13 03:41:14.543176 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:41:14.543183 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:41:14.543190 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:41:14.543196 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:41:14.543203 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:41:14.543209 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:41:14.543216 | orchestrator |
2026-02-13 03:41:14.543223 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-13 03:41:14.543229 | orchestrator | Friday 13 February 2026  03:41:13 +0000 (0:00:02.493)       0:08:34.681 *******
2026-02-13 03:41:14.543236 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:14.543242 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:14.543249 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:14.543255 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:41:14.543271 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:41:42.157971 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:41:42.158198 | orchestrator |
2026-02-13 03:41:42.158230 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-13 03:41:42.158250 | orchestrator |
2026-02-13 03:41:42.158270 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-13 03:41:42.158289 | orchestrator | Friday 13 February 2026  03:41:14 +0000 (0:00:00.872)       0:08:35.554 *******
2026-02-13 03:41:42.158309 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:41:42.158329 | orchestrator |
2026-02-13 03:41:42.158349 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-13 03:41:42.158368 | orchestrator | Friday 13 February 2026  03:41:15 +0000 (0:00:00.765)       0:08:36.319 *******
2026-02-13 03:41:42.158389 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:41:42.158406 | orchestrator |
2026-02-13 03:41:42.158425 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-13 03:41:42.158444 | orchestrator | Friday 13 February 2026  03:41:15 +0000 (0:00:00.549)       0:08:36.869 *******
2026-02-13 03:41:42.158462 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:42.158558 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:42.158577 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:42.158594 | orchestrator |
2026-02-13 03:41:42.158622 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-13 03:41:42.158643 | orchestrator | Friday 13 February 2026  03:41:16 +0000 (0:00:00.542)       0:08:37.412 *******
2026-02-13 03:41:42.158661 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:42.158680 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:42.158699 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:42.158717 | orchestrator |
2026-02-13 03:41:42.158737 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-13 03:41:42.158756 | orchestrator | Friday 13 February 2026  03:41:17 +0000 (0:00:00.701)       0:08:38.113 *******
2026-02-13 03:41:42.158775 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:42.158793 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:42.158812 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:42.158830 | orchestrator |
2026-02-13 03:41:42.158848 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-13 03:41:42.158905 | orchestrator | Friday 13 February 2026  03:41:17 +0000 (0:00:00.708)       0:08:38.822 *******
2026-02-13 03:41:42.158926 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:42.158944 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:42.158961 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:42.158978 | orchestrator |
2026-02-13 03:41:42.158997 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-13 03:41:42.159017 | orchestrator | Friday 13 February 2026  03:41:18 +0000 (0:00:00.976)       0:08:39.798 *******
2026-02-13 03:41:42.159034 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:42.159053 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:42.159072 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:42.159090 | orchestrator |
2026-02-13 03:41:42.159109 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-13 03:41:42.159128 | orchestrator | Friday 13 February 2026  03:41:19 +0000 (0:00:00.325)       0:08:40.124 *******
2026-02-13 03:41:42.159146 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:42.159164 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:42.159182 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:42.159199 | orchestrator |
2026-02-13 03:41:42.159219 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-13 03:41:42.159239 | orchestrator | Friday 13 February 2026  03:41:19 +0000 (0:00:00.327)       0:08:40.451 *******
2026-02-13 03:41:42.159258 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:42.159277 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:42.159295 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:42.159314 | orchestrator |
2026-02-13 03:41:42.159332 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-13 03:41:42.159351 | orchestrator | Friday 13 February 2026  03:41:19 +0000 (0:00:00.332)       0:08:40.784 *******
2026-02-13 03:41:42.159370 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:42.159390 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:42.159408 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:42.159427 | orchestrator |
2026-02-13 03:41:42.159446 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-13 03:41:42.159517 | orchestrator | Friday 13 February 2026  03:41:20 +0000 (0:00:01.025)       0:08:41.809 *******
2026-02-13 03:41:42.159537 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:42.159555 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:42.159573 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:42.159591 | orchestrator |
2026-02-13 03:41:42.159609 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-13 03:41:42.159628 | orchestrator | Friday 13 February 2026  03:41:21 +0000 (0:00:00.743)       0:08:42.553 *******
2026-02-13 03:41:42.159647 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:42.159666 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:42.159685 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:42.159705 | orchestrator |
2026-02-13 03:41:42.159724 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-13 03:41:42.159743 | orchestrator | Friday 13 February 2026  03:41:21 +0000 (0:00:00.360)       0:08:42.913 *******
2026-02-13 03:41:42.159764 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:42.159783 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:42.159801 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:42.159820 | orchestrator |
2026-02-13 03:41:42.159839 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-13 03:41:42.159858 | orchestrator | Friday 13 February 2026  03:41:22 +0000 (0:00:00.318)       0:08:43.232 *******
2026-02-13 03:41:42.159876 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:42.159894 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:42.159912 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:42.159932 | orchestrator |
2026-02-13 03:41:42.159951 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-13 03:41:42.159971 | orchestrator | Friday 13 February 2026  03:41:22 +0000 (0:00:00.588)       0:08:43.820 *******
2026-02-13 03:41:42.160039 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:42.160060 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:42.160080 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:42.160097 | orchestrator |
2026-02-13 03:41:42.160116 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-13 03:41:42.160135 | orchestrator | Friday 13 February 2026  03:41:23 +0000 (0:00:00.341)       0:08:44.162 *******
2026-02-13 03:41:42.160153 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:42.160171 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:42.160190 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:42.160208 | orchestrator |
2026-02-13 03:41:42.160227 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-13 03:41:42.160247 | orchestrator | Friday 13 February 2026  03:41:23 +0000 (0:00:00.350)       0:08:44.512 *******
2026-02-13 03:41:42.160266 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:42.160286 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:42.160304 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:42.160323 | orchestrator |
2026-02-13 03:41:42.160343 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-13 03:41:42.160362 | orchestrator | Friday 13 February 2026  03:41:23 +0000 (0:00:00.308)       0:08:44.821 *******
2026-02-13 03:41:42.160383 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:42.160402 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:42.160422 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:42.160442 | orchestrator |
2026-02-13 03:41:42.160461 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-13 03:41:42.160547 | orchestrator | Friday 13 February 2026  03:41:24 +0000 (0:00:00.574)       0:08:45.396 *******
2026-02-13 03:41:42.160568 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:42.160585 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:42.160604 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:42.160622 | orchestrator |
2026-02-13 03:41:42.160642 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-13 03:41:42.160661 | orchestrator | Friday 13 February 2026  03:41:24 +0000 (0:00:00.346)       0:08:45.743 *******
2026-02-13 03:41:42.160679 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:42.160698 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:42.160716 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:42.160735 | orchestrator |
2026-02-13 03:41:42.160754 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-13 03:41:42.160773 | orchestrator | Friday 13 February 2026  03:41:25 +0000 (0:00:00.336)       0:08:46.079 *******
2026-02-13 03:41:42.160791 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:41:42.160809 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:41:42.160828 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:41:42.160847 | orchestrator |
2026-02-13 03:41:42.160866 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-13 03:41:42.160884 | orchestrator | Friday 13 February 2026  03:41:25 +0000 (0:00:00.820)       0:08:46.899 *******
2026-02-13 03:41:42.160902 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:41:42.160921 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:41:42.160940 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-13 03:41:42.160960 | orchestrator |
2026-02-13 03:41:42.160979 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-13 03:41:42.160998 | orchestrator | Friday 13 February 2026  03:41:26 +0000 (0:00:00.425)       0:08:47.325 *******
2026-02-13 03:41:42.161017 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-13 03:41:42.161035 | orchestrator |
2026-02-13 03:41:42.161054 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-13 03:41:42.161073 | orchestrator | Friday 13 February 2026  03:41:28 +0000 (0:00:02.145)       0:08:49.471 *******
2026-02-13 03:41:42.161093 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-02-13 03:41:42.161131 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:41:42.161152 | orchestrator |
2026-02-13 03:41:42.161171 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-13 03:41:42.161189 | orchestrator | Friday 13 February 2026  03:41:28 +0000 (0:00:00.238)       0:08:49.710 *******
2026-02-13 03:41:42.161222 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-13 03:41:42.161252 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-13 03:41:42.161272 | orchestrator |
2026-02-13 03:41:42.161292 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-13 03:41:42.161311 | orchestrator | Friday 13 February 2026  03:41:36 +0000 (0:00:08.198)       0:08:57.909 *******
2026-02-13 03:41:42.161329 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-13 03:41:42.161347 | orchestrator |
2026-02-13 03:41:42.161366 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-13 03:41:42.161384 | orchestrator | Friday 13 February 2026  03:41:40 +0000 (0:00:03.435)       0:09:01.344 *******
2026-02-13 03:41:42.161403 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:41:42.161423 | orchestrator |
2026-02-13 03:41:42.161442 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-13 03:41:42.161460 | orchestrator | Friday 13 February 2026  03:41:41 +0000 (0:00:00.800)       0:09:02.145 *******
2026-02-13 03:41:42.161575 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-13 03:42:09.233867 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-13 03:42:09.233991 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-13 03:42:09.234005 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-13 03:42:09.234079 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-02-13 03:42:09.234094 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-13 03:42:09.234106 | orchestrator |
2026-02-13 03:42:09.234114 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-13 03:42:09.234120 | orchestrator | Friday 13 February 2026  03:41:42 +0000 (0:00:01.022)       0:09:03.168 *******
2026-02-13 03:42:09.234127 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-13 03:42:09.234134 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-13 03:42:09.234141 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-13 03:42:09.234147 | orchestrator |
2026-02-13 03:42:09.234154 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-13 03:42:09.234160 | orchestrator | Friday 13 February 2026  03:41:44 +0000 (0:00:02.084)       0:09:05.253 *******
2026-02-13 03:42:09.234192 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-13 03:42:09.234201 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-13 03:42:09.234207 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:42:09.234214 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-13 03:42:09.234221 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-02-13 03:42:09.234227 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:42:09.234234 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-13 03:42:09.234240 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-13 03:42:09.234267 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:42:09.234273 | orchestrator |
2026-02-13 03:42:09.234280 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-13 03:42:09.234286 | orchestrator | Friday 13 February 2026  03:41:45 +0000 (0:00:01.171)       0:09:06.424 *******
2026-02-13 03:42:09.234292 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:42:09.234298 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:42:09.234304 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:42:09.234310 | orchestrator |
2026-02-13 03:42:09.234316 | orchestrator | TASK [ceph-mds : Non_containerized.yml]
**************************************** 2026-02-13 03:42:09.234321 | orchestrator | Friday 13 February 2026 03:41:48 +0000 (0:00:03.036) 0:09:09.461 ******* 2026-02-13 03:42:09.234326 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:09.234332 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:09.234337 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:09.234342 | orchestrator | 2026-02-13 03:42:09.234348 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-13 03:42:09.234353 | orchestrator | Friday 13 February 2026 03:41:48 +0000 (0:00:00.335) 0:09:09.796 ******* 2026-02-13 03:42:09.234359 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:42:09.234364 | orchestrator | 2026-02-13 03:42:09.234370 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-13 03:42:09.234375 | orchestrator | Friday 13 February 2026 03:41:49 +0000 (0:00:00.800) 0:09:10.597 ******* 2026-02-13 03:42:09.234380 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:42:09.234387 | orchestrator | 2026-02-13 03:42:09.234392 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-13 03:42:09.234397 | orchestrator | Friday 13 February 2026 03:41:50 +0000 (0:00:00.552) 0:09:11.150 ******* 2026-02-13 03:42:09.234404 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:42:09.234410 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:42:09.234417 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:42:09.234423 | orchestrator | 2026-02-13 03:42:09.234429 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-13 03:42:09.234448 | orchestrator | Friday 13 February 2026 03:41:51 +0000 
(0:00:01.256) 0:09:12.406 ******* 2026-02-13 03:42:09.234454 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:42:09.234460 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:42:09.234467 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:42:09.234491 | orchestrator | 2026-02-13 03:42:09.234499 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-13 03:42:09.234505 | orchestrator | Friday 13 February 2026 03:41:52 +0000 (0:00:01.387) 0:09:13.793 ******* 2026-02-13 03:42:09.234511 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:42:09.234518 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:42:09.234525 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:42:09.234531 | orchestrator | 2026-02-13 03:42:09.234537 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-13 03:42:09.234543 | orchestrator | Friday 13 February 2026 03:41:54 +0000 (0:00:01.791) 0:09:15.584 ******* 2026-02-13 03:42:09.234550 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:42:09.234556 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:42:09.234563 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:42:09.234569 | orchestrator | 2026-02-13 03:42:09.234575 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-13 03:42:09.234582 | orchestrator | Friday 13 February 2026 03:41:57 +0000 (0:00:02.968) 0:09:18.553 ******* 2026-02-13 03:42:09.234588 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:09.234595 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:09.234601 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:09.234607 | orchestrator | 2026-02-13 03:42:09.234613 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-13 03:42:09.234625 | orchestrator | Friday 13 February 2026 03:41:59 +0000 (0:00:01.483) 0:09:20.036 
******* 2026-02-13 03:42:09.234632 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:42:09.234638 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:42:09.234659 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:42:09.234665 | orchestrator | 2026-02-13 03:42:09.234671 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-13 03:42:09.234677 | orchestrator | Friday 13 February 2026 03:41:59 +0000 (0:00:00.690) 0:09:20.727 ******* 2026-02-13 03:42:09.234682 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:42:09.234687 | orchestrator | 2026-02-13 03:42:09.234693 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2026-02-13 03:42:09.234698 | orchestrator | Friday 13 February 2026 03:42:00 +0000 (0:00:00.809) 0:09:21.536 ******* 2026-02-13 03:42:09.234703 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:09.234709 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:09.234714 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:09.234719 | orchestrator | 2026-02-13 03:42:09.234725 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-13 03:42:09.234730 | orchestrator | Friday 13 February 2026 03:42:00 +0000 (0:00:00.346) 0:09:21.883 ******* 2026-02-13 03:42:09.234735 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:42:09.234741 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:42:09.234746 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:42:09.234751 | orchestrator | 2026-02-13 03:42:09.234756 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-13 03:42:09.234762 | orchestrator | Friday 13 February 2026 03:42:02 +0000 (0:00:01.213) 0:09:23.097 ******* 2026-02-13 03:42:09.234767 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2026-02-13 03:42:09.234773 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 03:42:09.234778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 03:42:09.234783 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:09.234789 | orchestrator | 2026-02-13 03:42:09.234794 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-13 03:42:09.234800 | orchestrator | Friday 13 February 2026 03:42:02 +0000 (0:00:00.924) 0:09:24.021 ******* 2026-02-13 03:42:09.234805 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:09.234810 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:09.234815 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:09.234821 | orchestrator | 2026-02-13 03:42:09.234826 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-13 03:42:09.234831 | orchestrator | 2026-02-13 03:42:09.234837 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-13 03:42:09.234842 | orchestrator | Friday 13 February 2026 03:42:03 +0000 (0:00:00.838) 0:09:24.860 ******* 2026-02-13 03:42:09.234848 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:42:09.234854 | orchestrator | 2026-02-13 03:42:09.234860 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-13 03:42:09.234865 | orchestrator | Friday 13 February 2026 03:42:04 +0000 (0:00:00.532) 0:09:25.393 ******* 2026-02-13 03:42:09.234870 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:42:09.234876 | orchestrator | 2026-02-13 03:42:09.234881 | orchestrator | TASK [ceph-handler : Check for a mon container] 
******************************** 2026-02-13 03:42:09.234886 | orchestrator | Friday 13 February 2026 03:42:05 +0000 (0:00:00.754) 0:09:26.147 ******* 2026-02-13 03:42:09.234892 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:09.234897 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:09.234902 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:09.234911 | orchestrator | 2026-02-13 03:42:09.234917 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-13 03:42:09.234922 | orchestrator | Friday 13 February 2026 03:42:05 +0000 (0:00:00.329) 0:09:26.477 ******* 2026-02-13 03:42:09.234928 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:09.234933 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:09.234938 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:09.234944 | orchestrator | 2026-02-13 03:42:09.234949 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-13 03:42:09.234954 | orchestrator | Friday 13 February 2026 03:42:06 +0000 (0:00:00.712) 0:09:27.190 ******* 2026-02-13 03:42:09.234959 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:09.234968 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:09.234974 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:09.234979 | orchestrator | 2026-02-13 03:42:09.234984 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-13 03:42:09.234990 | orchestrator | Friday 13 February 2026 03:42:07 +0000 (0:00:00.979) 0:09:28.170 ******* 2026-02-13 03:42:09.234995 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:09.235000 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:09.235005 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:09.235010 | orchestrator | 2026-02-13 03:42:09.235016 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-13 
03:42:09.235021 | orchestrator | Friday 13 February 2026 03:42:07 +0000 (0:00:00.713) 0:09:28.883 ******* 2026-02-13 03:42:09.235026 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:09.235032 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:09.235037 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:09.235042 | orchestrator | 2026-02-13 03:42:09.235048 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-13 03:42:09.235053 | orchestrator | Friday 13 February 2026 03:42:08 +0000 (0:00:00.399) 0:09:29.282 ******* 2026-02-13 03:42:09.235059 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:09.235064 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:09.235069 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:09.235075 | orchestrator | 2026-02-13 03:42:09.235080 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-13 03:42:09.235085 | orchestrator | Friday 13 February 2026 03:42:08 +0000 (0:00:00.335) 0:09:29.617 ******* 2026-02-13 03:42:09.235091 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:09.235096 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:09.235101 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:09.235106 | orchestrator | 2026-02-13 03:42:09.235116 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-13 03:42:30.375186 | orchestrator | Friday 13 February 2026 03:42:09 +0000 (0:00:00.619) 0:09:30.237 ******* 2026-02-13 03:42:30.375295 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:30.375308 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:30.375318 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:30.375329 | orchestrator | 2026-02-13 03:42:30.375339 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-13 03:42:30.375350 | 
orchestrator | Friday 13 February 2026 03:42:09 +0000 (0:00:00.772) 0:09:31.010 ******* 2026-02-13 03:42:30.375360 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:30.375369 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:30.375379 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:30.375406 | orchestrator | 2026-02-13 03:42:30.375426 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-13 03:42:30.375436 | orchestrator | Friday 13 February 2026 03:42:10 +0000 (0:00:00.767) 0:09:31.777 ******* 2026-02-13 03:42:30.375446 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:30.375456 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:30.375466 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:30.375476 | orchestrator | 2026-02-13 03:42:30.375532 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-13 03:42:30.375566 | orchestrator | Friday 13 February 2026 03:42:11 +0000 (0:00:00.336) 0:09:32.113 ******* 2026-02-13 03:42:30.375576 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:30.375586 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:30.375597 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:30.375606 | orchestrator | 2026-02-13 03:42:30.375616 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-13 03:42:30.375626 | orchestrator | Friday 13 February 2026 03:42:11 +0000 (0:00:00.555) 0:09:32.668 ******* 2026-02-13 03:42:30.375636 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:30.375646 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:30.375655 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:30.375665 | orchestrator | 2026-02-13 03:42:30.375674 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-13 03:42:30.375686 | orchestrator | Friday 13 February 2026 
03:42:11 +0000 (0:00:00.336) 0:09:33.005 ******* 2026-02-13 03:42:30.375703 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:30.375719 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:30.375736 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:30.375753 | orchestrator | 2026-02-13 03:42:30.375770 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-13 03:42:30.375787 | orchestrator | Friday 13 February 2026 03:42:12 +0000 (0:00:00.349) 0:09:33.354 ******* 2026-02-13 03:42:30.375803 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:30.375819 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:30.375836 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:30.375852 | orchestrator | 2026-02-13 03:42:30.375868 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-13 03:42:30.375884 | orchestrator | Friday 13 February 2026 03:42:12 +0000 (0:00:00.329) 0:09:33.683 ******* 2026-02-13 03:42:30.375901 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:30.375917 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:30.375932 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:30.375948 | orchestrator | 2026-02-13 03:42:30.375965 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-13 03:42:30.375982 | orchestrator | Friday 13 February 2026 03:42:13 +0000 (0:00:00.591) 0:09:34.275 ******* 2026-02-13 03:42:30.375997 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:30.376013 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:30.376033 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:30.376063 | orchestrator | 2026-02-13 03:42:30.376080 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-13 03:42:30.376099 | orchestrator | Friday 13 February 2026 03:42:13 +0000 (0:00:00.348) 
0:09:34.624 ******* 2026-02-13 03:42:30.376116 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:30.376131 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:30.376146 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:30.376163 | orchestrator | 2026-02-13 03:42:30.376180 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-13 03:42:30.376196 | orchestrator | Friday 13 February 2026 03:42:13 +0000 (0:00:00.313) 0:09:34.937 ******* 2026-02-13 03:42:30.376212 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:30.376228 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:30.376242 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:30.376257 | orchestrator | 2026-02-13 03:42:30.376294 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-13 03:42:30.376311 | orchestrator | Friday 13 February 2026 03:42:14 +0000 (0:00:00.352) 0:09:35.290 ******* 2026-02-13 03:42:30.376327 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:42:30.376343 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:42:30.376359 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:42:30.376375 | orchestrator | 2026-02-13 03:42:30.376392 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-13 03:42:30.376408 | orchestrator | Friday 13 February 2026 03:42:15 +0000 (0:00:00.818) 0:09:36.108 ******* 2026-02-13 03:42:30.376442 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:42:30.376460 | orchestrator | 2026-02-13 03:42:30.376518 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-13 03:42:30.376532 | orchestrator | Friday 13 February 2026 03:42:15 +0000 (0:00:00.536) 0:09:36.645 ******* 2026-02-13 03:42:30.376541 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:42:30.376551 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-13 03:42:30.376561 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-13 03:42:30.376571 | orchestrator | 2026-02-13 03:42:30.376580 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-13 03:42:30.376590 | orchestrator | Friday 13 February 2026 03:42:17 +0000 (0:00:02.333) 0:09:38.978 ******* 2026-02-13 03:42:30.376599 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-13 03:42:30.376610 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-13 03:42:30.376619 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:42:30.376655 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-13 03:42:30.376666 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-13 03:42:30.376676 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:42:30.376685 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-13 03:42:30.376695 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-13 03:42:30.376704 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:42:30.376714 | orchestrator | 2026-02-13 03:42:30.376723 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-13 03:42:30.376733 | orchestrator | Friday 13 February 2026 03:42:19 +0000 (0:00:01.489) 0:09:40.467 ******* 2026-02-13 03:42:30.376743 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:42:30.376752 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:42:30.376762 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:42:30.376771 | orchestrator | 2026-02-13 03:42:30.376781 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-13 03:42:30.376790 | orchestrator | Friday 13 February 2026 03:42:19 +0000 
(0:00:00.338) 0:09:40.806 ******* 2026-02-13 03:42:30.376800 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:42:30.376810 | orchestrator | 2026-02-13 03:42:30.376819 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-13 03:42:30.376829 | orchestrator | Friday 13 February 2026 03:42:20 +0000 (0:00:00.549) 0:09:41.355 ******* 2026-02-13 03:42:30.376840 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-13 03:42:30.376852 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-13 03:42:30.376862 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-13 03:42:30.376871 | orchestrator | 2026-02-13 03:42:30.376881 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-13 03:42:30.376891 | orchestrator | Friday 13 February 2026 03:42:21 +0000 (0:00:01.138) 0:09:42.493 ******* 2026-02-13 03:42:30.376900 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:42:30.376911 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-13 03:42:30.376920 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:42:30.376930 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-13 03:42:30.376948 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:42:30.376957 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-13 03:42:30.376967 | orchestrator | 2026-02-13 03:42:30.376977 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-13 03:42:30.376986 | orchestrator | Friday 13 February 2026 03:42:25 +0000 (0:00:04.288) 0:09:46.782 ******* 2026-02-13 03:42:30.376996 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:42:30.377005 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-13 03:42:30.377015 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:42:30.377025 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-13 03:42:30.377034 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:42:30.377051 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-13 03:42:30.377061 | orchestrator | 2026-02-13 03:42:30.377070 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-13 03:42:30.377080 | orchestrator | Friday 13 February 2026 03:42:28 +0000 (0:00:02.249) 0:09:49.032 ******* 2026-02-13 03:42:30.377090 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-13 03:42:30.377099 | orchestrator | changed: [testbed-node-3] 2026-02-13 03:42:30.377109 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-13 03:42:30.377118 | orchestrator | changed: [testbed-node-4] 2026-02-13 03:42:30.377128 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-13 03:42:30.377137 | orchestrator | changed: [testbed-node-5] 2026-02-13 03:42:30.377147 | orchestrator | 2026-02-13 
03:42:30.377157 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-13 03:42:30.377170 | orchestrator | Friday 13 February 2026 03:42:29 +0000 (0:00:01.485) 0:09:50.517 ******* 2026-02-13 03:42:30.377186 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-13 03:42:30.377201 | orchestrator | 2026-02-13 03:42:30.377218 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-13 03:42:30.377234 | orchestrator | Friday 13 February 2026 03:42:29 +0000 (0:00:00.233) 0:09:50.751 ******* 2026-02-13 03:42:30.377250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 03:42:30.377267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 03:42:30.377294 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 03:43:14.382972 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 03:43:14.383094 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 03:43:14.383110 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:14.383125 | orchestrator | 2026-02-13 03:43:14.383146 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-13 03:43:14.383166 | orchestrator | Friday 13 February 2026 03:42:30 +0000 (0:00:00.632) 0:09:51.384 ******* 2026-02-13 03:43:14.383184 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 
3, 'type': 'replicated'}})
2026-02-13 03:43:14.383202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-13 03:43:14.383221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-13 03:43:14.383268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-13 03:43:14.383288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-13 03:43:14.383306 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:14.383324 | orchestrator |
2026-02-13 03:43:14.383344 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2026-02-13 03:43:14.383363 | orchestrator | Friday 13 February 2026 03:42:30 +0000 (0:00:00.610) 0:09:51.995 *******
2026-02-13 03:43:14.383381 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-13 03:43:14.383395 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-13 03:43:14.383405 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-13 03:43:14.383416 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-13 03:43:14.383427 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-13 03:43:14.383437 | orchestrator |
2026-02-13 03:43:14.383448 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2026-02-13 03:43:14.383459 | orchestrator | Friday 13 February 2026 03:43:01 +0000 (0:00:30.931) 0:10:22.926 *******
2026-02-13 03:43:14.383469 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:14.383480 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:14.383521 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:14.383534 | orchestrator |
2026-02-13 03:43:14.383547 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2026-02-13 03:43:14.383559 | orchestrator | Friday 13 February 2026 03:43:02 +0000 (0:00:00.348) 0:10:23.275 *******
2026-02-13 03:43:14.383572 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:14.383585 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:14.383597 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:14.383609 | orchestrator |
2026-02-13 03:43:14.383638 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2026-02-13 03:43:14.383651 | orchestrator | Friday 13 February 2026 03:43:02 +0000 (0:00:00.338) 0:10:23.614 *******
2026-02-13 03:43:14.383663 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:43:14.383675 | orchestrator |
2026-02-13 03:43:14.383688 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2026-02-13 03:43:14.383700 | orchestrator | Friday 13 February 2026 03:43:03 +0000 (0:00:00.815) 0:10:24.429 *******
2026-02-13 03:43:14.383712 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:43:14.383725 | orchestrator |
2026-02-13 03:43:14.383738 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2026-02-13 03:43:14.383750 | orchestrator | Friday 13 February 2026 03:43:03 +0000 (0:00:00.532) 0:10:24.961 *******
2026-02-13 03:43:14.383761 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:43:14.383772 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:43:14.383783 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:43:14.383793 | orchestrator |
2026-02-13 03:43:14.383804 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2026-02-13 03:43:14.383815 | orchestrator | Friday 13 February 2026 03:43:05 +0000 (0:00:01.569) 0:10:26.530 *******
2026-02-13 03:43:14.383836 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:43:14.383847 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:43:14.383858 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:43:14.383869 | orchestrator |
2026-02-13 03:43:14.383880 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-02-13 03:43:14.383890 | orchestrator | Friday 13 February 2026 03:43:06 +0000 (0:00:01.225) 0:10:27.756 *******
2026-02-13 03:43:14.383902 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:43:14.383933 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:43:14.383945 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:43:14.383956 | orchestrator |
2026-02-13 03:43:14.383967 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-02-13 03:43:14.383994 | orchestrator | Friday 13 February 2026 03:43:08 +0000 (0:00:01.790) 0:10:29.547 *******
2026-02-13 03:43:14.384005 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-13 03:43:14.384016 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-13 03:43:14.384027 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-13 03:43:14.384037 | orchestrator |
2026-02-13 03:43:14.384048 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-13 03:43:14.384059 | orchestrator | Friday 13 February 2026 03:43:11 +0000 (0:00:02.621) 0:10:32.168 *******
2026-02-13 03:43:14.384080 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:14.384091 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:14.384102 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:14.384112 | orchestrator |
2026-02-13 03:43:14.384123 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-13 03:43:14.384134 | orchestrator | Friday 13 February 2026 03:43:11 +0000 (0:00:00.355) 0:10:32.524 *******
2026-02-13 03:43:14.384145 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:43:14.384156 | orchestrator |
2026-02-13 03:43:14.384167 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-13 03:43:14.384177 | orchestrator | Friday 13 February 2026 03:43:12 +0000 (0:00:00.776) 0:10:33.301 *******
2026-02-13 03:43:14.384188 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:14.384200 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:14.384210 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:14.384221 | orchestrator |
2026-02-13 03:43:14.384232 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-13 03:43:14.384242 | orchestrator | Friday 13 February 2026 03:43:12 +0000 (0:00:00.334) 0:10:33.636 *******
2026-02-13 03:43:14.384253 | orchestrator
| skipping: [testbed-node-3]
2026-02-13 03:43:14.384264 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:14.384274 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:14.384285 | orchestrator |
2026-02-13 03:43:14.384295 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-13 03:43:14.384306 | orchestrator | Friday 13 February 2026 03:43:12 +0000 (0:00:00.320) 0:10:33.957 *******
2026-02-13 03:43:14.384317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 03:43:14.384329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 03:43:14.384340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 03:43:14.384350 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:14.384361 | orchestrator |
2026-02-13 03:43:14.384372 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-13 03:43:14.384383 | orchestrator | Friday 13 February 2026 03:43:13 +0000 (0:00:00.894) 0:10:34.852 *******
2026-02-13 03:43:14.384393 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:14.384404 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:14.384424 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:14.384435 | orchestrator |
2026-02-13 03:43:14.384446 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:43:14.384457 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-02-13 03:43:14.384475 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-02-13 03:43:14.384486 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-02-13 03:43:14.384516 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-02-13 03:43:14.384527 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-02-13 03:43:14.384537 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-02-13 03:43:14.384548 | orchestrator |
2026-02-13 03:43:14.384559 | orchestrator |
2026-02-13 03:43:14.384570 | orchestrator |
2026-02-13 03:43:14.384580 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:43:14.384591 | orchestrator | Friday 13 February 2026 03:43:14 +0000 (0:00:00.518) 0:10:35.370 *******
2026-02-13 03:43:14.384602 | orchestrator | ===============================================================================
2026-02-13 03:43:14.384612 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 57.77s
2026-02-13 03:43:14.384623 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.06s
2026-02-13 03:43:14.384633 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.93s
2026-02-13 03:43:14.384644 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 24.15s
2026-02-13 03:43:14.384659 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.92s
2026-02-13 03:43:14.384683 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.47s
2026-02-13 03:43:14.747392 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.43s
2026-02-13 03:43:14.747538 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.29s
2026-02-13 03:43:14.747561 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.32s
2026-02-13 03:43:14.747581 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.20s
2026-02-13 03:43:14.747600 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.66s
2026-02-13 03:43:14.747620 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.29s
2026-02-13 03:43:14.747637 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.09s
2026-02-13 03:43:14.747653 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.29s
2026-02-13 03:43:14.747670 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.83s
2026-02-13 03:43:14.747687 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.76s
2026-02-13 03:43:14.747704 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.57s
2026-02-13 03:43:14.747720 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.44s
2026-02-13 03:43:14.747737 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.32s
2026-02-13 03:43:14.747756 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.07s
2026-02-13 03:43:17.116480 | orchestrator | 2026-02-13 03:43:17 | INFO  | Task 1d14bf46-2001-4860-86b4-2d106f1e8baa
(ceph-pools) was prepared for execution.
2026-02-13 03:43:17.116662 | orchestrator | 2026-02-13 03:43:17 | INFO  | It takes a moment until task 1d14bf46-2001-4860-86b4-2d106f1e8baa (ceph-pools) has been started and output is visible here.
2026-02-13 03:43:31.132097 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-13 03:43:31.132222 | orchestrator | 2.16.14
2026-02-13 03:43:31.132240 | orchestrator |
2026-02-13 03:43:31.132252 | orchestrator | PLAY [Create ceph pools] *******************************************************
2026-02-13 03:43:31.132265 | orchestrator |
2026-02-13 03:43:31.132276 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-13 03:43:31.132287 | orchestrator | Friday 13 February 2026 03:43:21 +0000 (0:00:00.608) 0:00:00.608 *******
2026-02-13 03:43:31.132298 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:43:31.132310 | orchestrator |
2026-02-13 03:43:31.132321 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-13 03:43:31.132331 | orchestrator | Friday 13 February 2026 03:43:22 +0000 (0:00:00.627) 0:00:01.235 *******
2026-02-13 03:43:31.132342 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:31.132353 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:31.132364 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:31.132374 | orchestrator |
2026-02-13 03:43:31.132385 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-13 03:43:31.132396 | orchestrator | Friday 13 February 2026 03:43:22 +0000 (0:00:00.632) 0:00:01.867 *******
2026-02-13 03:43:31.132407 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:31.132418 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:31.132429 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:31.132439 | orchestrator |
2026-02-13 03:43:31.132450 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-13 03:43:31.132460 | orchestrator | Friday 13 February 2026 03:43:23 +0000 (0:00:00.293) 0:00:02.161 *******
2026-02-13 03:43:31.132471 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:31.132481 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:31.132492 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:31.132537 | orchestrator |
2026-02-13 03:43:31.132566 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-13 03:43:31.132578 | orchestrator | Friday 13 February 2026 03:43:23 +0000 (0:00:00.856) 0:00:03.017 *******
2026-02-13 03:43:31.132588 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:31.132600 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:31.132612 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:31.132624 | orchestrator |
2026-02-13 03:43:31.132636 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-13 03:43:31.132648 | orchestrator | Friday 13 February 2026 03:43:24 +0000 (0:00:00.311) 0:00:03.329 *******
2026-02-13 03:43:31.132660 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:31.132672 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:31.132685 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:31.132697 | orchestrator |
2026-02-13 03:43:31.132709 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-13 03:43:31.132722 | orchestrator | Friday 13 February 2026 03:43:24 +0000 (0:00:00.312) 0:00:03.641 *******
2026-02-13 03:43:31.132734 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:31.132746 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:31.132757 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:31.132770 | orchestrator |
2026-02-13 03:43:31.132783 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-13 03:43:31.132801 | orchestrator | Friday 13 February 2026 03:43:24 +0000 (0:00:00.346) 0:00:03.987 *******
2026-02-13 03:43:31.132821 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:31.132838 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:31.132857 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:31.132869 | orchestrator |
2026-02-13 03:43:31.132881 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-13 03:43:31.132916 | orchestrator | Friday 13 February 2026 03:43:25 +0000 (0:00:00.498) 0:00:04.486 *******
2026-02-13 03:43:31.132928 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:31.132941 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:31.132952 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:31.132963 | orchestrator |
2026-02-13 03:43:31.132974 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-13 03:43:31.132985 | orchestrator | Friday 13 February 2026 03:43:25 +0000 (0:00:00.301) 0:00:04.787 *******
2026-02-13 03:43:31.132995 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-13 03:43:31.133006 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 03:43:31.133017 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 03:43:31.133027 | orchestrator |
2026-02-13 03:43:31.133038 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-13 03:43:31.133048 | orchestrator | Friday 13 February 2026 03:43:26 +0000 (0:00:00.460) 0:00:05.441 *******
2026-02-13 03:43:31.133059 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:31.133069 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:31.133079 |
orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:31.133090 | orchestrator |
2026-02-13 03:43:31.133100 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-13 03:43:31.133111 | orchestrator | Friday 13 February 2026 03:43:26 +0000 (0:00:00.460) 0:00:05.901 *******
2026-02-13 03:43:31.133121 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-13 03:43:31.133132 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 03:43:31.133142 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 03:43:31.133153 | orchestrator |
2026-02-13 03:43:31.133164 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-13 03:43:31.133174 | orchestrator | Friday 13 February 2026 03:43:29 +0000 (0:00:02.235) 0:00:08.136 *******
2026-02-13 03:43:31.133185 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-13 03:43:31.133197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-13 03:43:31.133207 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-13 03:43:31.133218 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:31.133229 | orchestrator |
2026-02-13 03:43:31.133257 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-13 03:43:31.133269 | orchestrator | Friday 13 February 2026 03:43:29 +0000 (0:00:00.633) 0:00:08.770 *******
2026-02-13 03:43:31.133282 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-13 03:43:31.133296 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-13 03:43:31.133307 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-13 03:43:31.133318 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:31.133328 | orchestrator |
2026-02-13 03:43:31.133339 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-13 03:43:31.133350 | orchestrator | Friday 13 February 2026 03:43:30 +0000 (0:00:01.065) 0:00:09.835 *******
2026-02-13 03:43:31.133368 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-13 03:43:31.133391 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-13 03:43:31.133403 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-13 03:43:31.133414 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:31.133425 | orchestrator |
2026-02-13 03:43:31.133436 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-13 03:43:31.133446 | orchestrator | Friday 13 February 2026 03:43:30 +0000 (0:00:00.185) 0:00:10.020 *******
2026-02-13 03:43:31.133459 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9a39aafafb69', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-13 03:43:27.727075', 'end': '2026-02-13 03:43:27.771499', 'delta': '0:00:00.044424', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a39aafafb69'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-13 03:43:31.133473 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b8f8955ec790', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-13 03:43:28.271104', 'end': '2026-02-13 03:43:28.323236', 'delta': '0:00:00.052132', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8f8955ec790'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-13 03:43:31.133494 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '30f78d02966b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-13 03:43:28.847318', 'end': '2026-02-13 03:43:28.896228', 'delta': '0:00:00.048910', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['30f78d02966b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-13 03:43:37.928427 | orchestrator |
2026-02-13 03:43:37.928588 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-13 03:43:37.928608 | orchestrator | Friday 13 February 2026 03:43:31 +0000 (0:00:00.189) 0:00:10.210 *******
2026-02-13 03:43:37.928645 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:37.928658 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:43:37.928669 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:43:37.928680 | orchestrator |
2026-02-13 03:43:37.928691 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-13 03:43:37.928702 | orchestrator | Friday 13 February 2026 03:43:31 +0000 (0:00:00.456) 0:00:10.666 *******
2026-02-13 03:43:37.928714 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-13 03:43:37.928725 | orchestrator |
2026-02-13 03:43:37.928750 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-13 03:43:37.928762 |
orchestrator | Friday 13 February 2026 03:43:33 +0000 (0:00:01.654) 0:00:12.321 *******
2026-02-13 03:43:37.928773 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.928784 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:37.928794 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:37.928805 | orchestrator |
2026-02-13 03:43:37.928816 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-13 03:43:37.928827 | orchestrator | Friday 13 February 2026 03:43:33 +0000 (0:00:00.298) 0:00:12.620 *******
2026-02-13 03:43:37.928837 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.928848 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:37.928859 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:37.928869 | orchestrator |
2026-02-13 03:43:37.928880 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-13 03:43:37.928891 | orchestrator | Friday 13 February 2026 03:43:34 +0000 (0:00:00.863) 0:00:13.484 *******
2026-02-13 03:43:37.928901 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.928912 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:37.928923 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:37.928934 | orchestrator |
2026-02-13 03:43:37.928946 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-13 03:43:37.928956 | orchestrator | Friday 13 February 2026 03:43:34 +0000 (0:00:00.295) 0:00:13.780 *******
2026-02-13 03:43:37.928969 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:43:37.928982 | orchestrator |
2026-02-13 03:43:37.928994 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-13 03:43:37.929007 | orchestrator | Friday 13 February 2026 03:43:34 +0000 (0:00:00.134) 0:00:13.914 *******
2026-02-13 03:43:37.929019 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.929032 | orchestrator |
2026-02-13 03:43:37.929044 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-13 03:43:37.929056 | orchestrator | Friday 13 February 2026 03:43:35 +0000 (0:00:00.247) 0:00:14.161 *******
2026-02-13 03:43:37.929069 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.929082 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:37.929095 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:37.929107 | orchestrator |
2026-02-13 03:43:37.929119 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-13 03:43:37.929131 | orchestrator | Friday 13 February 2026 03:43:35 +0000 (0:00:00.292) 0:00:14.454 *******
2026-02-13 03:43:37.929144 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.929156 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:37.929168 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:37.929180 | orchestrator |
2026-02-13 03:43:37.929193 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-13 03:43:37.929205 | orchestrator | Friday 13 February 2026 03:43:35 +0000 (0:00:00.313) 0:00:14.767 *******
2026-02-13 03:43:37.929218 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.929230 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:37.929242 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:37.929255 | orchestrator |
2026-02-13 03:43:37.929267 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-13 03:43:37.929279 | orchestrator | Friday 13 February 2026 03:43:36 +0000 (0:00:00.519) 0:00:15.287 *******
2026-02-13 03:43:37.929301 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.929312 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:37.929322 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:37.929334 | orchestrator |
2026-02-13 03:43:37.929345 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-13 03:43:37.929356 | orchestrator | Friday 13 February 2026 03:43:36 +0000 (0:00:00.346) 0:00:15.633 *******
2026-02-13 03:43:37.929367 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.929377 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:37.929388 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:37.929399 | orchestrator |
2026-02-13 03:43:37.929410 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-13 03:43:37.929421 | orchestrator | Friday 13 February 2026 03:43:36 +0000 (0:00:00.318) 0:00:15.952 *******
2026-02-13 03:43:37.929432 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.929443 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:37.929453 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:37.929464 | orchestrator |
2026-02-13 03:43:37.929475 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-13 03:43:37.929487 | orchestrator | Friday 13 February 2026 03:43:37 +0000 (0:00:00.527) 0:00:16.479 *******
2026-02-13 03:43:37.929515 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:43:37.929527 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:43:37.929538 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:43:37.929548 | orchestrator |
2026-02-13 03:43:37.929559 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-13 03:43:37.929570 | orchestrator | Friday 13 February 2026 03:43:37 +0000 (0:00:00.322) 0:00:16.802 *******
2026-02-13 03:43:37.929601 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f', 'dm-uuid-LVM-NgeS2OAf1eQbq2fjon94hTyRASj6CjzqPJD89JdnKlkkAQnNMDwPk0jJQkfrVtCM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-13 03:43:37.929623 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab', 'dm-uuid-LVM-rnSZIgArmxAmbcLvOJFLEn8mgwYRnXlE3olXViRUdTa1K1tyYaVS99W21lGqyhJE'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-13 03:43:37.929636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-13 03:43:37.929650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-13 03:43:37.929662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-13 03:43:37.929680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-13 03:43:37.929691 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-13 03:43:37.929702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-13 03:43:37.929713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-13 03:43:37.929732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-13 03:43:38.038211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-13 03:43:38.038373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-09kMNs-4MO2-JNQz-8aT0-f4so-6Z9I-fZuQQ1', 'scsi-0QEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165', 'scsi-SQEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.038403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6', 'dm-uuid-LVM-smkv35UmDioSyiKczhjvHmfqXmqpX7QT8MWiF1jmxyBB14hpOPcESPktQ6Pbw4WI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.038449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NVJFab-TDNv-OZxQ-P7ah-aykU-eVq3-5VieAW', 'scsi-0QEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322', 'scsi-SQEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.038481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f', 'dm-uuid-LVM-RYX1Dlxf1hzjqbJFMgqiTL3FjKVcMxwPPZJAxrorT0BeTcQP51a9OdG0Vnk33f2g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.038560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226', 'scsi-SQEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.038594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.038609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.038621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-13 03:43:38.038633 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.038645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.038672 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.233437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.233586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.233601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.233631 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.233660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1jNUFK-ju5u-D7ij-Py62-0wVT-eVBU-hKEJvE', 'scsi-0QEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788', 'scsi-SQEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.233676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6g4jq1-0RJN-2V5m-4iLs-xOZr-EnEV-0z42fM', 'scsi-0QEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52', 'scsi-SQEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.233686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460', 'scsi-SQEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.233701 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:38.233712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.233722 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:43:38.233730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6', 'dm-uuid-LVM-9LyOomemE8dFgmHX9kCkGcu77vJ6QdzmZ9A74lmOVeHsLlc22BADhqJ8uA2fx6vT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.233740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1', 'dm-uuid-LVM-RKsGyEe6XXFp06rqxLIXGVK0DxbU0GWh40QmdxhJXhUwOk2tHWKnT9i9j7e2AfAw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.233749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.233763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.485373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.485488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.485551 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.485564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.485574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.485584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-13 03:43:38.485635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 
'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.485666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-39Ra41-aCTS-vi2k-2lif-ZhtI-jPX4-Yda4Fg', 'scsi-0QEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e', 'scsi-SQEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.485679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-198k1R-oXI9-ndMQ-UumA-r8dv-vGdj-iXXLN8', 'scsi-0QEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3', 'scsi-SQEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.485690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d', 'scsi-SQEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.485702 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-13 03:43:38.485713 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:43:38.485725 | orchestrator | 2026-02-13 03:43:38.485736 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-02-13 03:43:38.485747 | orchestrator | Friday 13 February 2026 03:43:38 +0000 (0:00:00.664) 0:00:17.467 ******* 2026-02-13 03:43:38.485765 .. 2026-02-13 03:43:38.607083 | orchestrator | skipping: [testbed-node-3] => (items: dm-0, dm-1, loop0..loop7, sda; skipped: True, skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool'; per-item ansible_devices fact dicts condensed)
2026-02-13 03:43:38.607114 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6', 'dm-uuid-LVM-smkv35UmDioSyiKczhjvHmfqXmqpX7QT8MWiF1jmxyBB14hpOPcESPktQ6Pbw4WI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.744953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-09kMNs-4MO2-JNQz-8aT0-f4so-6Z9I-fZuQQ1', 'scsi-0QEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165', 'scsi-SQEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745047 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-NVJFab-TDNv-OZxQ-P7ah-aykU-eVq3-5VieAW', 'scsi-0QEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322', 'scsi-SQEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745059 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f', 
'dm-uuid-LVM-RYX1Dlxf1hzjqbJFMgqiTL3FjKVcMxwPPZJAxrorT0BeTcQP51a9OdG0Vnk33f2g'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745070 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226', 'scsi-SQEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745152 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745177 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745192 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745208 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745222 
| orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745236 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745262 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:38.745288 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.745312 | 
orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.869641 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.869746 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-13 03:43:38.869809 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-1jNUFK-ju5u-D7ij-Py62-0wVT-eVBU-hKEJvE', 'scsi-0QEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788', 'scsi-SQEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.869843 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-6g4jq1-0RJN-2V5m-4iLs-xOZr-EnEV-0z42fM', 'scsi-0QEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52', 'scsi-SQEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.869855 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460', 'scsi-SQEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.869868 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6', 'dm-uuid-LVM-9LyOomemE8dFgmHX9kCkGcu77vJ6QdzmZ9A74lmOVeHsLlc22BADhqJ8uA2fx6vT'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.869887 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.869904 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1', 'dm-uuid-LVM-RKsGyEe6XXFp06rqxLIXGVK0DxbU0GWh40QmdxhJXhUwOk2tHWKnT9i9j7e2AfAw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:38.869917 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:43:38.869952 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:39.013958 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:39.014119 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:39.014136 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:39.014184 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:39.014223 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:39.014244 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:39.014287 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:39.014310 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-13 03:43:39.014353 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-39Ra41-aCTS-vi2k-2lif-ZhtI-jPX4-Yda4Fg', 'scsi-0QEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e', 'scsi-SQEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:39.014382 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-198k1R-oXI9-ndMQ-UumA-r8dv-vGdj-iXXLN8', 'scsi-0QEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3', 'scsi-SQEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:49.082442 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d', 'scsi-SQEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:49.082754 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-13-02-25-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-13 03:43:49.082856 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:43:49.082890 | orchestrator | 2026-02-13 03:43:49.082917 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-13 03:43:49.082943 | orchestrator | Friday 13 February 2026 03:43:39 +0000 (0:00:00.633) 0:00:18.100 ******* 2026-02-13 03:43:49.082968 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:43:49.082991 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:43:49.083010 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:43:49.083026 | orchestrator | 2026-02-13 03:43:49.083042 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-13 03:43:49.083059 | orchestrator | Friday 13 February 2026 03:43:39 +0000 (0:00:00.898) 0:00:18.999 ******* 2026-02-13 03:43:49.083077 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:43:49.083094 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:43:49.083111 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:43:49.083129 | orchestrator | 2026-02-13 03:43:49.083147 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-13 03:43:49.083166 | orchestrator | Friday 13 February 2026 03:43:40 +0000 (0:00:00.324) 0:00:19.324 ******* 2026-02-13 03:43:49.083180 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:43:49.083191 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:43:49.083202 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:43:49.083213 | orchestrator | 2026-02-13 03:43:49.083241 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-13 03:43:49.083252 | orchestrator | Friday 13 February 2026 03:43:40 +0000 (0:00:00.650) 0:00:19.974 
******* 2026-02-13 03:43:49.083263 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:49.083274 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:43:49.083285 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:43:49.083296 | orchestrator | 2026-02-13 03:43:49.083307 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-13 03:43:49.083318 | orchestrator | Friday 13 February 2026 03:43:41 +0000 (0:00:00.331) 0:00:20.306 ******* 2026-02-13 03:43:49.083329 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:49.083339 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:43:49.083350 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:43:49.083361 | orchestrator | 2026-02-13 03:43:49.083372 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-13 03:43:49.083382 | orchestrator | Friday 13 February 2026 03:43:41 +0000 (0:00:00.703) 0:00:21.009 ******* 2026-02-13 03:43:49.083393 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:49.083404 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:43:49.083414 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:43:49.083425 | orchestrator | 2026-02-13 03:43:49.083436 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-13 03:43:49.083447 | orchestrator | Friday 13 February 2026 03:43:42 +0000 (0:00:00.324) 0:00:21.333 ******* 2026-02-13 03:43:49.083457 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-13 03:43:49.083469 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-13 03:43:49.083480 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-13 03:43:49.083491 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-13 03:43:49.083501 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-13 03:43:49.083537 | orchestrator | 
ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-13 03:43:49.083548 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-13 03:43:49.083573 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-13 03:43:49.083583 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-13 03:43:49.083595 | orchestrator | 2026-02-13 03:43:49.083606 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-13 03:43:49.083618 | orchestrator | Friday 13 February 2026 03:43:43 +0000 (0:00:01.022) 0:00:22.356 ******* 2026-02-13 03:43:49.083650 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-13 03:43:49.083662 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-13 03:43:49.083673 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-13 03:43:49.083683 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:49.083694 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-13 03:43:49.083705 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-13 03:43:49.083715 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-13 03:43:49.083726 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:43:49.083736 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-13 03:43:49.083747 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-13 03:43:49.083758 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-13 03:43:49.083769 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:43:49.083779 | orchestrator | 2026-02-13 03:43:49.083790 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-13 03:43:49.083801 | orchestrator | Friday 13 February 2026 03:43:43 +0000 (0:00:00.398) 0:00:22.754 ******* 2026-02-13 
03:43:49.083812 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 03:43:49.083824 | orchestrator | 2026-02-13 03:43:49.083834 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-13 03:43:49.083847 | orchestrator | Friday 13 February 2026 03:43:44 +0000 (0:00:00.751) 0:00:23.506 ******* 2026-02-13 03:43:49.083858 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:49.083868 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:43:49.083879 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:43:49.083890 | orchestrator | 2026-02-13 03:43:49.083900 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-13 03:43:49.083911 | orchestrator | Friday 13 February 2026 03:43:44 +0000 (0:00:00.333) 0:00:23.840 ******* 2026-02-13 03:43:49.083922 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:49.083932 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:43:49.083943 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:43:49.083954 | orchestrator | 2026-02-13 03:43:49.083964 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-13 03:43:49.083975 | orchestrator | Friday 13 February 2026 03:43:45 +0000 (0:00:00.323) 0:00:24.164 ******* 2026-02-13 03:43:49.083986 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:49.083997 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:43:49.084007 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:43:49.084018 | orchestrator | 2026-02-13 03:43:49.084029 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-13 03:43:49.084040 | orchestrator | Friday 13 February 2026 03:43:45 +0000 (0:00:00.525) 0:00:24.689 ******* 2026-02-13 
03:43:49.084050 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:43:49.084061 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:43:49.084072 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:43:49.084083 | orchestrator | 2026-02-13 03:43:49.084093 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-13 03:43:49.084104 | orchestrator | Friday 13 February 2026 03:43:46 +0000 (0:00:00.418) 0:00:25.108 ******* 2026-02-13 03:43:49.084115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 03:43:49.084133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 03:43:49.084150 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 03:43:49.084161 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:49.084171 | orchestrator | 2026-02-13 03:43:49.084182 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-13 03:43:49.084193 | orchestrator | Friday 13 February 2026 03:43:46 +0000 (0:00:00.380) 0:00:25.488 ******* 2026-02-13 03:43:49.084204 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 03:43:49.084215 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 03:43:49.084225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 03:43:49.084236 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:49.084247 | orchestrator | 2026-02-13 03:43:49.084258 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-13 03:43:49.084268 | orchestrator | Friday 13 February 2026 03:43:46 +0000 (0:00:00.389) 0:00:25.878 ******* 2026-02-13 03:43:49.084279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 03:43:49.084289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 03:43:49.084300 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 03:43:49.084311 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:43:49.084321 | orchestrator | 2026-02-13 03:43:49.084332 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-13 03:43:49.084342 | orchestrator | Friday 13 February 2026 03:43:47 +0000 (0:00:00.379) 0:00:26.257 ******* 2026-02-13 03:43:49.084353 | orchestrator | ok: [testbed-node-3] 2026-02-13 03:43:49.084364 | orchestrator | ok: [testbed-node-4] 2026-02-13 03:43:49.084374 | orchestrator | ok: [testbed-node-5] 2026-02-13 03:43:49.084385 | orchestrator | 2026-02-13 03:43:49.084396 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-13 03:43:49.084406 | orchestrator | Friday 13 February 2026 03:43:47 +0000 (0:00:00.317) 0:00:26.575 ******* 2026-02-13 03:43:49.084417 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-13 03:43:49.084428 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-13 03:43:49.084438 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-13 03:43:49.084449 | orchestrator | 2026-02-13 03:43:49.084460 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-13 03:43:49.084470 | orchestrator | Friday 13 February 2026 03:43:48 +0000 (0:00:00.751) 0:00:27.327 ******* 2026-02-13 03:43:49.084481 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-13 03:43:49.084499 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 03:45:28.598611 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 03:45:28.598761 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-13 03:45:28.598791 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-13 03:45:28.598813 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-13 03:45:28.598835 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-13 03:45:28.598856 | orchestrator | 2026-02-13 03:45:28.598873 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-13 03:45:28.598885 | orchestrator | Friday 13 February 2026 03:43:49 +0000 (0:00:00.842) 0:00:28.169 ******* 2026-02-13 03:45:28.598896 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-13 03:45:28.598907 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 03:45:28.598918 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 03:45:28.598928 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-13 03:45:28.598979 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-13 03:45:28.599000 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-13 03:45:28.599018 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-13 03:45:28.599037 | orchestrator | 2026-02-13 03:45:28.599057 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-13 03:45:28.599076 | orchestrator | Friday 13 February 2026 03:43:50 +0000 (0:00:01.686) 0:00:29.855 ******* 2026-02-13 03:45:28.599095 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:45:28.599115 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:45:28.599131 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-13 03:45:28.599144 | orchestrator | 2026-02-13 03:45:28.599156 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-13 03:45:28.599169 | orchestrator | Friday 13 February 2026 03:43:51 +0000 (0:00:00.389) 0:00:30.245 ******* 2026-02-13 03:45:28.599183 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-13 03:45:28.599198 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-13 03:45:28.599227 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-13 03:45:28.599240 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-13 03:45:28.599253 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-13 03:45:28.599266 | orchestrator | 2026-02-13 03:45:28.599279 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-13 03:45:28.599291 | orchestrator | Friday 13 February 2026 03:44:36 +0000 (0:00:45.704) 0:01:15.949 ******* 2026-02-13 03:45:28.599304 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599322 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599341 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599359 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599377 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599396 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599414 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-13 03:45:28.599435 | orchestrator | 2026-02-13 03:45:28.599454 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-13 03:45:28.599473 | orchestrator | Friday 13 February 2026 03:45:00 +0000 (0:00:23.287) 0:01:39.237 ******* 2026-02-13 03:45:28.599505 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599581 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599607 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599625 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599643 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599661 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599679 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-13 03:45:28.599698 | orchestrator | 2026-02-13 03:45:28.599717 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-13 03:45:28.599735 | orchestrator | Friday 13 February 2026 03:45:11 +0000 (0:00:11.385) 0:01:50.623 ******* 2026-02-13 03:45:28.599752 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599771 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-13 03:45:28.599791 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-13 03:45:28.599809 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599828 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-13 03:45:28.599848 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-13 03:45:28.599866 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599884 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-13 03:45:28.599895 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-13 03:45:28.599906 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599917 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-13 03:45:28.599928 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-13 03:45:28.599938 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599949 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-13 03:45:28.599960 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-13 03:45:28.599970 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-13 03:45:28.599981 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-13 03:45:28.599991 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-13 03:45:28.600002 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-13 03:45:28.600013 | orchestrator | 2026-02-13 03:45:28.600023 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 03:45:28.600043 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-13 03:45:28.600056 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-13 03:45:28.600067 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-13 03:45:28.600078 | orchestrator | 2026-02-13 03:45:28.600089 | orchestrator | 2026-02-13 03:45:28.600099 | orchestrator | 2026-02-13 03:45:28.600110 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:45:28.600121 | orchestrator | Friday 13 February 2026 03:45:28 +0000 (0:00:17.021) 0:02:07.644 ******* 2026-02-13 03:45:28.600131 | orchestrator | =============================================================================== 2026-02-13 03:45:28.600153 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.70s 2026-02-13 03:45:28.600163 | orchestrator | generate keys ---------------------------------------------------------- 23.29s 2026-02-13 03:45:28.600174 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.02s 
2026-02-13 03:45:28.600184 | orchestrator | get keys from monitors ------------------------------------------------- 11.39s 2026-02-13 03:45:28.600195 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.24s 2026-02-13 03:45:28.600206 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.69s 2026-02-13 03:45:28.600216 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.66s 2026-02-13 03:45:28.600227 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.07s 2026-02-13 03:45:28.600238 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.02s 2026-02-13 03:45:28.600248 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.90s 2026-02-13 03:45:28.600259 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.86s 2026-02-13 03:45:28.600269 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.86s 2026-02-13 03:45:28.600280 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.84s 2026-02-13 03:45:28.600302 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.75s 2026-02-13 03:45:28.906518 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.75s 2026-02-13 03:45:28.906656 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.70s 2026-02-13 03:45:28.906671 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.66s 2026-02-13 03:45:28.906682 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.65s 2026-02-13 03:45:28.906693 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s 2026-02-13 
03:45:28.906704 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.63s 2026-02-13 03:45:31.165968 | orchestrator | 2026-02-13 03:45:31 | INFO  | Task 2f78bf5c-3483-4ec9-bd2b-3a5702b034ad (copy-ceph-keys) was prepared for execution. 2026-02-13 03:45:31.166091 | orchestrator | 2026-02-13 03:45:31 | INFO  | It takes a moment until task 2f78bf5c-3483-4ec9-bd2b-3a5702b034ad (copy-ceph-keys) has been started and output is visible here. 2026-02-13 03:46:07.953294 | orchestrator | 2026-02-13 03:46:07.953418 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-13 03:46:07.953436 | orchestrator | 2026-02-13 03:46:07.953449 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-13 03:46:07.953461 | orchestrator | Friday 13 February 2026 03:45:35 +0000 (0:00:00.160) 0:00:00.160 ******* 2026-02-13 03:46:07.953472 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-13 03:46:07.953485 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.953496 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.953506 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-13 03:46:07.953517 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.953528 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-13 03:46:07.953539 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-13 03:46:07.953606 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-13 03:46:07.953647 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-13 03:46:07.953663 | orchestrator | 2026-02-13 03:46:07.953680 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-13 03:46:07.953697 | orchestrator | Friday 13 February 2026 03:45:39 +0000 (0:00:04.552) 0:00:04.712 ******* 2026-02-13 03:46:07.953713 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-13 03:46:07.953747 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.953763 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.953778 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-13 03:46:07.953795 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.953811 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-13 03:46:07.953828 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-13 03:46:07.953846 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-13 03:46:07.953863 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-13 03:46:07.953876 | orchestrator | 2026-02-13 03:46:07.953889 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-13 03:46:07.953900 | orchestrator | Friday 13 February 2026 03:45:44 +0000 (0:00:04.121) 0:00:08.834 ******* 2026-02-13 03:46:07.953913 
| orchestrator | changed: [testbed-manager -> localhost] 2026-02-13 03:46:07.953924 | orchestrator | 2026-02-13 03:46:07.953936 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-02-13 03:46:07.953947 | orchestrator | Friday 13 February 2026 03:45:44 +0000 (0:00:00.938) 0:00:09.772 ******* 2026-02-13 03:46:07.953959 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-02-13 03:46:07.953970 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.953982 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.953994 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-02-13 03:46:07.954005 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.954069 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-02-13 03:46:07.954082 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2026-02-13 03:46:07.954093 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-02-13 03:46:07.954103 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-02-13 03:46:07.954114 | orchestrator | 2026-02-13 03:46:07.954126 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-02-13 03:46:07.954137 | orchestrator | Friday 13 February 2026 03:45:57 +0000 (0:00:12.899) 0:00:22.672 ******* 2026-02-13 03:46:07.954148 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-02-13 03:46:07.954160 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 
2026-02-13 03:46:07.954172 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-13 03:46:07.954183 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-02-13 03:46:07.954211 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-13 03:46:07.954232 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-02-13 03:46:07.954242 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-02-13 03:46:07.954251 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-02-13 03:46:07.954261 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-02-13 03:46:07.954270 | orchestrator | 2026-02-13 03:46:07.954280 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2026-02-13 03:46:07.954289 | orchestrator | Friday 13 February 2026 03:46:00 +0000 (0:00:03.049) 0:00:25.722 ******* 2026-02-13 03:46:07.954301 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-02-13 03:46:07.954318 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.954334 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.954350 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-02-13 03:46:07.954365 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-02-13 03:46:07.954380 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-02-13 03:46:07.954396 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.glance.keyring) 2026-02-13 03:46:07.954414 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-02-13 03:46:07.954429 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-02-13 03:46:07.954444 | orchestrator | 2026-02-13 03:46:07.954470 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 03:46:07.954498 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 03:46:07.954510 | orchestrator | 2026-02-13 03:46:07.954519 | orchestrator | 2026-02-13 03:46:07.954529 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:46:07.954539 | orchestrator | Friday 13 February 2026 03:46:07 +0000 (0:00:06.747) 0:00:32.469 ******* 2026-02-13 03:46:07.954548 | orchestrator | =============================================================================== 2026-02-13 03:46:07.954607 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.90s 2026-02-13 03:46:07.954617 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.75s 2026-02-13 03:46:07.954626 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.55s 2026-02-13 03:46:07.954636 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.12s 2026-02-13 03:46:07.954645 | orchestrator | Check if target directories exist --------------------------------------- 3.05s 2026-02-13 03:46:07.954655 | orchestrator | Create share directory -------------------------------------------------- 0.94s 2026-02-13 03:46:20.414003 | orchestrator | 2026-02-13 03:46:20 | INFO  | Task 531558a7-2689-4f14-80a4-d2d91b5b2339 (cephclient) was prepared for execution. 
2026-02-13 03:46:20.414167 | orchestrator | 2026-02-13 03:46:20 | INFO  | It takes a moment until task 531558a7-2689-4f14-80a4-d2d91b5b2339 (cephclient) has been started and output is visible here.
2026-02-13 03:47:19.113203 | orchestrator |
2026-02-13 03:47:19.113322 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-13 03:47:19.113340 | orchestrator |
2026-02-13 03:47:19.113352 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-13 03:47:19.113364 | orchestrator | Friday 13 February 2026 03:46:24 +0000 (0:00:00.227) 0:00:00.227 *******
2026-02-13 03:47:19.113376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-13 03:47:19.113413 | orchestrator |
2026-02-13 03:47:19.113425 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-13 03:47:19.113436 | orchestrator | Friday 13 February 2026 03:46:24 +0000 (0:00:00.229) 0:00:00.457 *******
2026-02-13 03:47:19.113447 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-13 03:47:19.113458 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-13 03:47:19.113470 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-13 03:47:19.113482 | orchestrator |
2026-02-13 03:47:19.113493 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-13 03:47:19.113504 | orchestrator | Friday 13 February 2026 03:46:26 +0000 (0:00:01.224) 0:00:01.682 *******
2026-02-13 03:47:19.113515 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-13 03:47:19.113526 | orchestrator |
2026-02-13 03:47:19.113536 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-13 03:47:19.113547 | orchestrator | Friday 13 February 2026 03:46:27 +0000 (0:00:01.454) 0:00:03.136 *******
2026-02-13 03:47:19.113558 | orchestrator | changed: [testbed-manager]
2026-02-13 03:47:19.113570 | orchestrator |
2026-02-13 03:47:19.113647 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-13 03:47:19.113666 | orchestrator | Friday 13 February 2026 03:46:28 +0000 (0:00:00.942) 0:00:04.079 *******
2026-02-13 03:47:19.113685 | orchestrator | changed: [testbed-manager]
2026-02-13 03:47:19.113697 | orchestrator |
2026-02-13 03:47:19.113707 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-13 03:47:19.113718 | orchestrator | Friday 13 February 2026 03:46:29 +0000 (0:00:00.927) 0:00:05.006 *******
2026-02-13 03:47:19.113729 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-13 03:47:19.113742 | orchestrator | ok: [testbed-manager]
2026-02-13 03:47:19.113756 | orchestrator |
2026-02-13 03:47:19.113769 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-13 03:47:19.113781 | orchestrator | Friday 13 February 2026 03:47:10 +0000 (0:00:41.002) 0:00:46.009 *******
2026-02-13 03:47:19.113794 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-13 03:47:19.113807 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-13 03:47:19.113825 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-13 03:47:19.113844 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-13 03:47:19.113869 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-13 03:47:19.113916 | orchestrator |
2026-02-13 03:47:19.113950 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-13 03:47:19.113967 | orchestrator | Friday 13 February 2026 03:47:14 +0000 (0:00:03.672) 0:00:49.681 *******
2026-02-13 03:47:19.113986 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-13 03:47:19.114002 | orchestrator |
2026-02-13 03:47:19.114089 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-13 03:47:19.114112 | orchestrator | Friday 13 February 2026 03:47:14 +0000 (0:00:00.380) 0:00:50.062 *******
2026-02-13 03:47:19.114130 | orchestrator | skipping: [testbed-manager]
2026-02-13 03:47:19.114148 | orchestrator |
2026-02-13 03:47:19.114166 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-13 03:47:19.114184 | orchestrator | Friday 13 February 2026 03:47:14 +0000 (0:00:00.417) 0:00:50.198 *******
2026-02-13 03:47:19.114201 | orchestrator | skipping: [testbed-manager]
2026-02-13 03:47:19.114219 | orchestrator |
2026-02-13 03:47:19.114238 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-13 03:47:19.114256 | orchestrator | Friday 13 February 2026 03:47:14 +0000 (0:00:00.417) 0:00:50.616 *******
2026-02-13 03:47:19.114295 | orchestrator | changed: [testbed-manager]
2026-02-13 03:47:19.114315 | orchestrator |
2026-02-13 03:47:19.114329 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-13 03:47:19.114359 | orchestrator | Friday 13 February 2026 03:47:16 +0000 (0:00:01.429) 0:00:52.045 *******
2026-02-13 03:47:19.114371 | orchestrator | changed: [testbed-manager]
2026-02-13 03:47:19.114382 | orchestrator |
2026-02-13 03:47:19.114392 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-13 03:47:19.114403 | orchestrator | Friday 13 February 2026 03:47:17 +0000 (0:00:00.633) 0:00:52.678 *******
2026-02-13 03:47:19.114414 | orchestrator | changed: [testbed-manager]
2026-02-13 03:47:19.114424 | orchestrator |
2026-02-13 03:47:19.114435 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-13 03:47:19.114446 | orchestrator | Friday 13 February 2026 03:47:17 +0000 (0:00:00.531) 0:00:53.209 *******
2026-02-13 03:47:19.114456 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-13 03:47:19.114467 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-13 03:47:19.114478 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-13 03:47:19.114489 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-13 03:47:19.114499 | orchestrator |
2026-02-13 03:47:19.114511 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:47:19.114523 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 03:47:19.114534 | orchestrator |
2026-02-13 03:47:19.114545 | orchestrator |
2026-02-13 03:47:19.114628 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:47:19.114643 | orchestrator | Friday 13 February 2026 03:47:18 +0000 (0:00:01.321) 0:00:54.531 *******
2026-02-13 03:47:19.114654 | orchestrator | ===============================================================================
2026-02-13 03:47:19.114666 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.00s
2026-02-13 03:47:19.114678 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.67s
2026-02-13 03:47:19.114689 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.45s
2026-02-13 03:47:19.114701 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.43s
2026-02-13 03:47:19.114713 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.32s
2026-02-13 03:47:19.114724 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s
2026-02-13 03:47:19.114736 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.94s
2026-02-13 03:47:19.114748 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.93s
2026-02-13 03:47:19.114759 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.63s
2026-02-13 03:47:19.114771 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.53s
2026-02-13 03:47:19.114783 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.42s
2026-02-13 03:47:19.114795 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.38s
2026-02-13 03:47:19.114806 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.23s
2026-02-13 03:47:19.114818 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2026-02-13 03:47:21.002252 | orchestrator | 2026-02-13 03:47:20 | INFO  | Task e2282466-efe1-4f86-8793-cc853099973c (ceph-bootstrap-dashboard) was prepared for execution.
2026-02-13 03:47:21.002351 | orchestrator | 2026-02-13 03:47:20 | INFO  | It takes a moment until task e2282466-efe1-4f86-8793-cc853099973c (ceph-bootstrap-dashboard) has been started and output is visible here.
2026-02-13 03:48:45.011086 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-13 03:48:45.011188 | orchestrator | 2.16.14
2026-02-13 03:48:45.011204 | orchestrator |
2026-02-13 03:48:45.011217 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-02-13 03:48:45.011229 | orchestrator |
2026-02-13 03:48:45.011240 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-02-13 03:48:45.011275 | orchestrator | Friday 13 February 2026 03:47:25 +0000 (0:00:00.280) 0:00:00.280 *******
2026-02-13 03:48:45.011287 | orchestrator | changed: [testbed-manager]
2026-02-13 03:48:45.011298 | orchestrator |
2026-02-13 03:48:45.011309 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-02-13 03:48:45.011320 | orchestrator | Friday 13 February 2026 03:47:27 +0000 (0:00:01.895) 0:00:02.176 *******
2026-02-13 03:48:45.011331 | orchestrator | changed: [testbed-manager]
2026-02-13 03:48:45.011342 | orchestrator |
2026-02-13 03:48:45.011352 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-02-13 03:48:45.011363 | orchestrator | Friday 13 February 2026 03:47:28 +0000 (0:00:01.020) 0:00:03.196 *******
2026-02-13 03:48:45.011374 | orchestrator | changed: [testbed-manager]
2026-02-13 03:48:45.011385 | orchestrator |
2026-02-13 03:48:45.011395 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-02-13 03:48:45.011406 | orchestrator | Friday 13 February 2026 03:47:29 +0000 (0:00:01.037) 0:00:04.234 *******
2026-02-13 03:48:45.011417 | orchestrator | changed: [testbed-manager]
2026-02-13 03:48:45.011428 | orchestrator |
2026-02-13 03:48:45.011438 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-02-13 03:48:45.011449 | orchestrator | Friday 13 February 2026 03:47:30 +0000 (0:00:01.200) 0:00:05.435 *******
2026-02-13 03:48:45.011460 | orchestrator | changed: [testbed-manager]
2026-02-13 03:48:45.011470 | orchestrator |
2026-02-13 03:48:45.011481 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-02-13 03:48:45.011492 | orchestrator | Friday 13 February 2026 03:47:31 +0000 (0:00:01.054) 0:00:06.489 *******
2026-02-13 03:48:45.011514 | orchestrator | changed: [testbed-manager]
2026-02-13 03:48:45.011526 | orchestrator |
2026-02-13 03:48:45.011537 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-02-13 03:48:45.011548 | orchestrator | Friday 13 February 2026 03:47:32 +0000 (0:00:01.074) 0:00:07.564 *******
2026-02-13 03:48:45.011558 | orchestrator | changed: [testbed-manager]
2026-02-13 03:48:45.011569 | orchestrator |
2026-02-13 03:48:45.011580 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-02-13 03:48:45.011591 | orchestrator | Friday 13 February 2026 03:47:34 +0000 (0:00:02.061) 0:00:09.626 *******
2026-02-13 03:48:45.011660 | orchestrator | changed: [testbed-manager]
2026-02-13 03:48:45.011675 | orchestrator |
2026-02-13 03:48:45.011688 | orchestrator | TASK [Create admin user] *******************************************************
2026-02-13 03:48:45.011701 | orchestrator | Friday 13 February 2026 03:47:35 +0000 (0:00:01.169) 0:00:10.795 *******
2026-02-13 03:48:45.011713 | orchestrator | changed: [testbed-manager]
2026-02-13 03:48:45.011726 | orchestrator |
2026-02-13 03:48:45.011739 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-02-13 03:48:45.011752 | orchestrator | Friday 13 February 2026 03:48:18 +0000 (0:00:43.219) 0:00:54.015 *******
2026-02-13 03:48:45.011764 | orchestrator | skipping: [testbed-manager]
2026-02-13 03:48:45.011776 | orchestrator |
2026-02-13 03:48:45.011789 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-13 03:48:45.011801 | orchestrator |
2026-02-13 03:48:45.011814 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-13 03:48:45.011827 | orchestrator | Friday 13 February 2026 03:48:19 +0000 (0:00:00.208) 0:00:54.223 *******
2026-02-13 03:48:45.011839 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:48:45.011851 | orchestrator |
2026-02-13 03:48:45.011864 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-13 03:48:45.011876 | orchestrator |
2026-02-13 03:48:45.011889 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-13 03:48:45.011902 | orchestrator | Friday 13 February 2026 03:48:30 +0000 (0:00:11.769) 0:01:05.993 *******
2026-02-13 03:48:45.011914 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:48:45.011926 | orchestrator |
2026-02-13 03:48:45.011940 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-13 03:48:45.011960 | orchestrator |
2026-02-13 03:48:45.011973 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-13 03:48:45.011987 | orchestrator | Friday 13 February 2026 03:48:33 +0000 (0:00:02.345) 0:01:08.339 *******
2026-02-13 03:48:45.011998 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:48:45.012009 | orchestrator |
2026-02-13 03:48:45.012020 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:48:45.012032 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-13 03:48:45.012043 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:48:45.012054 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:48:45.012065 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 03:48:45.012076 | orchestrator |
2026-02-13 03:48:45.012087 | orchestrator |
2026-02-13 03:48:45.012097 | orchestrator |
2026-02-13 03:48:45.012108 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:48:45.012119 | orchestrator | Friday 13 February 2026 03:48:44 +0000 (0:00:11.401) 0:01:19.741 *******
2026-02-13 03:48:45.012129 | orchestrator | ===============================================================================
2026-02-13 03:48:45.012140 | orchestrator | Create admin user ------------------------------------------------------ 43.22s
2026-02-13 03:48:45.012167 | orchestrator | Restart ceph manager service ------------------------------------------- 25.52s
2026-02-13 03:48:45.012178 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.06s
2026-02-13 03:48:45.012189 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.90s
2026-02-13 03:48:45.012200 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.20s
2026-02-13 03:48:45.012210 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.17s
2026-02-13 03:48:45.012221 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.07s
2026-02-13 03:48:45.012232 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.05s
2026-02-13 03:48:45.012242 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.04s
2026-02-13 03:48:45.012253 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.02s
2026-02-13 03:48:45.012263 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.21s
2026-02-13 03:48:45.293558 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-02-13 03:48:47.256242 | orchestrator | 2026-02-13 03:48:47 | INFO  | Task f7ae258d-50eb-4126-9411-f0359eb8c35b (keystone) was prepared for execution.
2026-02-13 03:48:47.256309 | orchestrator | 2026-02-13 03:48:47 | INFO  | It takes a moment until task f7ae258d-50eb-4126-9411-f0359eb8c35b (keystone) has been started and output is visible here.
2026-02-13 03:48:54.229682 | orchestrator |
2026-02-13 03:48:54.229821 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 03:48:54.229857 | orchestrator |
2026-02-13 03:48:54.229882 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 03:48:54.229908 | orchestrator | Friday 13 February 2026 03:48:51 +0000 (0:00:00.263) 0:00:00.263 *******
2026-02-13 03:48:54.229920 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:48:54.229931 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:48:54.229942 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:48:54.229956 | orchestrator |
2026-02-13 03:48:54.229974 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 03:48:54.229993 | orchestrator | Friday 13 February 2026 03:48:51 +0000 (0:00:00.326) 0:00:00.589 *******
2026-02-13 03:48:54.230108 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-13 03:48:54.230137 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-13 03:48:54.230157 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-13 03:48:54.230177 | orchestrator |
2026-02-13 03:48:54.230196 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-02-13 03:48:54.230213 | orchestrator |
2026-02-13 03:48:54.230226 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-13 03:48:54.230238 | orchestrator | Friday 13 February 2026 03:48:52 +0000 (0:00:00.438) 0:00:01.028 *******
2026-02-13 03:48:54.230251 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:48:54.230267 | orchestrator |
2026-02-13 03:48:54.230287 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-02-13 03:48:54.230305 | orchestrator | Friday 13 February 2026 03:48:52 +0000 (0:00:00.571) 0:00:01.600 *******
2026-02-13 03:48:54.230330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-13 03:48:54.230355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-13 03:48:54.230414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-13 03:48:54.230455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-13 03:48:54.230475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-13 03:48:54.230489 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-13 03:48:54.230502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-13 03:48:54.230516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-13 03:48:54.230529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-13 03:48:54.230547 | orchestrator |
2026-02-13 03:48:54.230559 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-02-13 03:48:54.230578 | orchestrator | Friday 13 February 2026 03:48:54 +0000 (0:00:01.518) 0:00:03.118 *******
2026-02-13 03:49:00.109490 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:49:00.109601 | orchestrator |
2026-02-13 03:49:00.109673 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-02-13 03:49:00.109703 | orchestrator | Friday 13 February 2026 03:48:54 +0000 (0:00:00.296) 0:00:03.415 *******
2026-02-13 03:49:00.109715 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:49:00.109726 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:49:00.109737 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:49:00.109748 | orchestrator |
2026-02-13 03:49:00.109760 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-02-13 03:49:00.109771 | orchestrator | Friday 13 February 2026 03:48:54 +0000 (0:00:00.324) 0:00:03.739 *******
2026-02-13 03:49:00.109782 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-13 03:49:00.109793 | orchestrator |
2026-02-13 03:49:00.109804 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-13 03:49:00.109815 | orchestrator | Friday 13 February 2026 03:48:55 +0000 (0:00:00.834) 0:00:04.574 *******
2026-02-13 03:49:00.109827 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:49:00.109838 | orchestrator |
2026-02-13 03:49:00.109849 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-02-13 03:49:00.109860 | orchestrator | Friday 13 February 2026 03:48:56 +0000 (0:00:00.521) 0:00:05.096 *******
2026-02-13 03:49:00.109877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-13 03:49:00.109893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-13 03:49:00.109906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-13 03:49:00.109968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-13 03:49:00.109985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-13 03:49:00.109997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-13 03:49:00.110008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-13 03:49:00.110080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-13 03:49:00.110104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-13 03:49:00.110117 | orchestrator |
2026-02-13 03:49:00.110130 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2026-02-13 03:49:00.110143 | orchestrator | Friday 13 February 2026 03:48:59 +0000 (0:00:03.300) 0:00:08.397 *******
2026-02-13 03:49:00.110167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'},
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:49:00.882128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:49:00.882287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:49:00.882303 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:49:00.882313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:49:00.882337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:49:00.882348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:49:00.882355 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:49:00.882376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:49:00.882383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-13 03:49:00.882390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:49:00.882401 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:49:00.882408 | orchestrator | 2026-02-13 03:49:00.882416 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-13 03:49:00.882424 | orchestrator | Friday 13 February 2026 03:49:00 +0000 (0:00:00.611) 0:00:09.008 ******* 2026-02-13 03:49:00.882431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:49:00.882441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:49:00.882454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:49:04.388990 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:49:04.389150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:49:04.389196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:49:04.389253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:49:04.389275 | 
orchestrator | skipping: [testbed-node-1] 2026-02-13 03:49:04.389311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:49:04.389330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:49:04.389372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:49:04.389390 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:49:04.389407 | orchestrator | 2026-02-13 03:49:04.389422 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-13 03:49:04.389433 | orchestrator | Friday 13 February 2026 03:49:00 +0000 (0:00:00.768) 0:00:09.776 ******* 2026-02-13 03:49:04.389444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:49:04.389466 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:49:04.389492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:49:04.389533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-13 03:49:09.315171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-13 03:49:09.315290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-13 03:49:09.315303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-13 03:49:09.315313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-13 03:49:09.315334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-13 
03:49:09.315344 | orchestrator | 2026-02-13 03:49:09.315355 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-13 03:49:09.315365 | orchestrator | Friday 13 February 2026 03:49:04 +0000 (0:00:03.506) 0:00:13.283 ******* 2026-02-13 03:49:09.315391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:49:09.315402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-13 03:49:09.315420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:49:09.315430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:49:09.315443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:49:09.315460 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:49:12.888000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-13 03:49:12.888117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-13 03:49:12.888130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-13 03:49:12.888140 | orchestrator | 2026-02-13 03:49:12.888151 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-02-13 03:49:12.888161 | orchestrator | Friday 13 February 2026 03:49:09 +0000 (0:00:04.928) 0:00:18.211 ******* 2026-02-13 03:49:12.888170 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:49:12.888180 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:49:12.888189 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:49:12.888197 | orchestrator | 
2026-02-13 03:49:12.888213 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-02-13 03:49:12.888227 | orchestrator | Friday 13 February 2026 03:49:10 +0000 (0:00:01.440) 0:00:19.652 *******
2026-02-13 03:49:12.888241 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:49:12.888256 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:49:12.888270 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:49:12.888284 | orchestrator |
2026-02-13 03:49:12.888297 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-02-13 03:49:12.888312 | orchestrator | Friday 13 February 2026 03:49:11 +0000 (0:00:00.760) 0:00:20.412 *******
2026-02-13 03:49:12.888326 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:49:12.888340 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:49:12.888355 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:49:12.888368 | orchestrator |
2026-02-13 03:49:12.888409 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-02-13 03:49:12.888425 | orchestrator | Friday 13 February 2026 03:49:12 +0000 (0:00:00.501) 0:00:20.914 *******
2026-02-13 03:49:12.888434 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:49:12.888442 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:49:12.888454 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:49:12.888467 | orchestrator |
2026-02-13 03:49:12.888483 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-02-13 03:49:12.888497 | orchestrator | Friday 13 February 2026 03:49:12 +0000 (0:00:00.287) 0:00:21.201 *******
2026-02-13 03:49:12.888533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes':
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:49:12.888555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:49:12.888567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:49:12.888578 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:49:12.888589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:49:12.888606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:49:12.888643 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:49:12.888663 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:49:12.888682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-13 03:49:31.780241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 03:49:31.780365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 03:49:31.780383 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:49:31.780397 | orchestrator | 2026-02-13 03:49:31.780411 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-13 03:49:31.780423 | orchestrator | Friday 13 February 2026 03:49:12 +0000 (0:00:00.577) 0:00:21.779 ******* 2026-02-13 03:49:31.780434 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:49:31.780445 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:49:31.780456 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:49:31.780467 | orchestrator | 2026-02-13 03:49:31.780478 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-02-13 03:49:31.780489 | orchestrator | Friday 13 February 2026 03:49:13 +0000 (0:00:00.294) 0:00:22.074 ******* 2026-02-13 03:49:31.780500 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-13 03:49:31.780512 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-13 03:49:31.780549 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-13 03:49:31.780561 | orchestrator |
2026-02-13 03:49:31.780588 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-02-13 03:49:31.780599 | orchestrator | Friday 13 February 2026 03:49:14 +0000 (0:00:01.813) 0:00:23.887 *******
2026-02-13 03:49:31.780610 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-13 03:49:31.780673 | orchestrator |
2026-02-13 03:49:31.780685 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-02-13 03:49:31.780696 | orchestrator | Friday 13 February 2026 03:49:15 +0000 (0:00:00.917) 0:00:24.804 *******
2026-02-13 03:49:31.780707 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:49:31.780718 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:49:31.780729 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:49:31.780740 | orchestrator |
2026-02-13 03:49:31.780751 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-02-13 03:49:31.780762 | orchestrator | Friday 13 February 2026 03:49:16 +0000 (0:00:00.532) 0:00:25.337 *******
2026-02-13 03:49:31.780775 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-13 03:49:31.780788 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-13 03:49:31.780801 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-13 03:49:31.780814 | orchestrator |
2026-02-13 03:49:31.780826 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-02-13 03:49:31.780840 | orchestrator | Friday 13 February 2026 03:49:17 +0000 (0:00:01.075) 0:00:26.413 *******
2026-02-13 03:49:31.780852 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:49:31.780866 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:49:31.780878 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:49:31.780890 | orchestrator |
2026-02-13 03:49:31.780903 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-02-13 03:49:31.780915 | orchestrator | Friday 13 February 2026 03:49:17 +0000 (0:00:00.494) 0:00:26.907 *******
2026-02-13 03:49:31.780929 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-13 03:49:31.780942 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-13 03:49:31.780955 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-13 03:49:31.780967 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-13 03:49:31.780980 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-13 03:49:31.780992 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-13 03:49:31.781005 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-13 03:49:31.781018 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-13 03:49:31.781049 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-13 03:49:31.781062 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-13 03:49:31.781074 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-13 03:49:31.781086 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-13 03:49:31.781098 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-13 03:49:31.781111 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-13 03:49:31.781123 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-13 03:49:31.781135 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-13 03:49:31.781155 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-13 03:49:31.781166 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-13 03:49:31.781177 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-13 03:49:31.781188 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-13 03:49:31.781198 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-13 03:49:31.781209 | orchestrator |
2026-02-13 03:49:31.781220 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-02-13 03:49:31.781231 | orchestrator | Friday 13 February 2026 03:49:26 +0000 (0:00:08.718) 0:00:35.626 *******
2026-02-13 03:49:31.781241 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-13 03:49:31.781252 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-13 03:49:31.781263 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-13 03:49:31.781273
| orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-13 03:49:31.781284 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-13 03:49:31.781295 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-13 03:49:31.781306 | orchestrator | 2026-02-13 03:49:31.781317 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-02-13 03:49:31.781333 | orchestrator | Friday 13 February 2026 03:49:29 +0000 (0:00:02.689) 0:00:38.316 ******* 2026-02-13 03:49:31.781347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:49:31.781369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:51:16.352099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-13 03:51:16.352261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-13 03:51:16.352311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-13 03:51:16.352333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-13 03:51:16.352350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-13 03:51:16.352380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-13 03:51:16.352401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-13 03:51:16.352434 | orchestrator | 2026-02-13 03:51:16.352448 | orchestrator | TASK [keystone : include_tasks] ************************************************ 
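The keystone_fernet containers above share the keystone_fernet_tokens volume and receive a fernet-rotate.sh driven from cron. As an illustrative sketch of the staged/primary key scheme keystone's fernet tokens use (assumptions: index 0 is the staged key and the highest index is the primary; `rotate` is a hypothetical model, not the Kolla script):

```python
# Illustrative model of fernet key rotation (assumption: standard keystone
# layout where key file 0 is staged and the highest index is primary; this is
# NOT the fernet-rotate.sh deployed above). A rotation promotes the staged key
# to the new primary, stages a fresh key 0, and prunes the oldest secondary
# keys down to max_active_keys.

def rotate(keys, max_active_keys=3):
    """keys: dict index -> key material; return the dict after one rotation."""
    new_primary = max(keys) + 1
    keys = dict(keys)
    keys[new_primary] = keys.pop(0)      # staged key becomes the new primary
    keys[0] = f"fresh-{new_primary}"     # stage a brand-new key
    while len(keys) > max_active_keys:   # drop the oldest secondaries
        keys.pop(min(k for k in keys if k != 0))
    return keys

state = {0: "staged-a", 1: "primary-a"}
state = rotate(state)   # indices {0, 1, 2}; the old staged key is now primary (2)
state = rotate(state)   # indices {0, 2, 3} after pruning secondary key 1
```

Because old keys remain valid until pruned, tokens issued before a rotation can still be validated for `max_active_keys - 1` further rotations, which is why the keys must also be synchronized across nodes (the fernet-node-sync.sh / fernet-push.sh scripts copied earlier).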
2026-02-13 03:51:16.352473 | orchestrator | Friday 13 February 2026 03:49:31 +0000 (0:00:02.354) 0:00:40.671 *******
2026-02-13 03:51:16.352484 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:51:16.352497 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:51:16.352507 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:51:16.352518 | orchestrator |
2026-02-13 03:51:16.352530 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-02-13 03:51:16.352541 | orchestrator | Friday 13 February 2026 03:49:32 +0000 (0:00:00.489) 0:00:41.160 *******
2026-02-13 03:51:16.352551 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:51:16.352562 | orchestrator |
2026-02-13 03:51:16.352573 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-02-13 03:51:16.352584 | orchestrator | Friday 13 February 2026 03:49:34 +0000 (0:00:02.200) 0:00:43.361 *******
2026-02-13 03:51:16.352595 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:51:16.352605 | orchestrator |
2026-02-13 03:51:16.352616 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-02-13 03:51:16.352629 | orchestrator | Friday 13 February 2026 03:49:36 +0000 (0:00:02.143) 0:00:45.505 *******
2026-02-13 03:51:16.352642 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:51:16.352655 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:51:16.352690 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:51:16.352703 | orchestrator |
2026-02-13 03:51:16.352715 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-02-13 03:51:16.352728 | orchestrator | Friday 13 February 2026 03:49:37 +0000 (0:00:00.834) 0:00:46.339 *******
2026-02-13 03:51:16.352740 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:51:16.352752 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:51:16.352764 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:51:16.352776 | orchestrator |
2026-02-13 03:51:16.352789 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-02-13 03:51:16.352806 | orchestrator | Friday 13 February 2026 03:49:37 +0000 (0:00:00.339) 0:00:46.679 *******
2026-02-13 03:51:16.352817 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:51:16.352829 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:51:16.352840 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:51:16.352850 | orchestrator |
2026-02-13 03:51:16.352861 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-02-13 03:51:16.352872 | orchestrator | Friday 13 February 2026 03:49:38 +0000 (0:00:00.576) 0:00:47.255 *******
2026-02-13 03:51:16.352882 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:51:16.352893 | orchestrator |
2026-02-13 03:51:16.352904 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-02-13 03:51:16.352915 | orchestrator | Friday 13 February 2026 03:49:53 +0000 (0:00:14.751) 0:01:02.007 *******
2026-02-13 03:51:16.352925 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:51:16.352936 | orchestrator |
2026-02-13 03:51:16.352947 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-13 03:51:16.352958 | orchestrator | Friday 13 February 2026 03:50:03 +0000 (0:00:10.230) 0:01:12.237 *******
2026-02-13 03:51:16.352976 | orchestrator |
2026-02-13 03:51:16.352987 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-13 03:51:16.352998 | orchestrator | Friday 13 February 2026 03:50:03 +0000 (0:00:00.081) 0:01:12.318 *******
2026-02-13 03:51:16.353009 | orchestrator |
2026-02-13 03:51:16.353020 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-13 03:51:16.353030 | orchestrator | Friday 13 February 2026 03:50:03 +0000 (0:00:00.068) 0:01:12.386 *******
2026-02-13 03:51:16.353041 | orchestrator |
2026-02-13 03:51:16.353051 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-02-13 03:51:16.353062 | orchestrator | Friday 13 February 2026 03:50:03 +0000 (0:00:00.071) 0:01:12.458 *******
2026-02-13 03:51:16.353073 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:51:16.353084 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:51:16.353094 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:51:16.353105 | orchestrator |
2026-02-13 03:51:16.353116 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-02-13 03:51:16.353126 | orchestrator | Friday 13 February 2026 03:50:53 +0000 (0:00:49.713) 0:02:02.171 *******
2026-02-13 03:51:16.353137 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:51:16.353148 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:51:16.353159 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:51:16.353169 | orchestrator |
2026-02-13 03:51:16.353180 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-02-13 03:51:16.353191 | orchestrator | Friday 13 February 2026 03:51:03 +0000 (0:00:09.920) 0:02:12.092 *******
2026-02-13 03:51:16.353202 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:51:16.353212 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:51:16.353223 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:51:16.353234 | orchestrator |
2026-02-13 03:51:16.353245 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-13 03:51:16.353256 | orchestrator | Friday 13 February 2026 03:51:15 +0000 (0:00:12.544) 0:02:24.636 *******
2026-02-13 03:51:16.353274 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for
testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:52:06.988241 | orchestrator |
2026-02-13 03:52:06.988368 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-13 03:52:06.988385 | orchestrator | Friday 13 February 2026 03:51:16 +0000 (0:00:00.607) 0:02:25.244 *******
2026-02-13 03:52:06.988397 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:52:06.988411 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:52:06.988422 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:52:06.988433 | orchestrator |
2026-02-13 03:52:06.988445 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-13 03:52:06.988456 | orchestrator | Friday 13 February 2026 03:51:17 +0000 (0:00:01.204) 0:02:26.448 *******
2026-02-13 03:52:06.988468 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:52:06.988480 | orchestrator |
2026-02-13 03:52:06.988491 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-13 03:52:06.988502 | orchestrator | Friday 13 February 2026 03:51:19 +0000 (0:00:01.846) 0:02:28.295 *******
2026-02-13 03:52:06.988513 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-13 03:52:06.988524 | orchestrator |
2026-02-13 03:52:06.988536 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-13 03:52:06.988547 | orchestrator | Friday 13 February 2026 03:51:30 +0000 (0:00:11.205) 0:02:39.501 *******
2026-02-13 03:52:06.988558 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-13 03:52:06.988569 | orchestrator |
2026-02-13 03:52:06.988580 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-13 03:52:06.988591 | orchestrator | Friday 13 February 2026 03:51:55 +0000 (0:00:24.703) 0:03:04.205 *******
2026-02-13 03:52:06.988601 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-13 03:52:06.988639 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-13 03:52:06.988651 | orchestrator |
2026-02-13 03:52:06.988662 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-13 03:52:06.988673 | orchestrator | Friday 13 February 2026 03:52:02 +0000 (0:00:06.738) 0:03:10.943 *******
2026-02-13 03:52:06.988734 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:52:06.988746 | orchestrator |
2026-02-13 03:52:06.988759 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-13 03:52:06.988777 | orchestrator | Friday 13 February 2026 03:52:02 +0000 (0:00:00.136) 0:03:11.079 *******
2026-02-13 03:52:06.988796 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:52:06.988814 | orchestrator |
2026-02-13 03:52:06.988833 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-13 03:52:06.988851 | orchestrator | Friday 13 February 2026 03:52:02 +0000 (0:00:00.134) 0:03:11.214 *******
2026-02-13 03:52:06.988870 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:52:06.988888 | orchestrator |
2026-02-13 03:52:06.988925 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-13 03:52:06.988944 | orchestrator | Friday 13 February 2026 03:52:02 +0000 (0:00:00.142) 0:03:11.356 *******
2026-02-13 03:52:06.988964 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:52:06.988982 | orchestrator |
2026-02-13 03:52:06.989000 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-13 03:52:06.989019 | orchestrator | Friday 13 February 2026 03:52:02 +0000 (0:00:00.517) 0:03:11.874 *******
2026-02-13 03:52:06.989037 | orchestrator | ok: [testbed-node-0]
2026-02-13
03:52:06.989055 | orchestrator | 2026-02-13 03:52:06.989074 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-02-13 03:52:06.989093 | orchestrator | Friday 13 February 2026 03:52:06 +0000 (0:00:03.152) 0:03:15.026 ******* 2026-02-13 03:52:06.989111 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:52:06.989130 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:52:06.989148 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:52:06.989167 | orchestrator | 2026-02-13 03:52:06.989186 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 03:52:06.989207 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-13 03:52:06.989228 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-13 03:52:06.989247 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-13 03:52:06.989263 | orchestrator | 2026-02-13 03:52:06.989275 | orchestrator | 2026-02-13 03:52:06.989287 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 03:52:06.989297 | orchestrator | Friday 13 February 2026 03:52:06 +0000 (0:00:00.477) 0:03:15.504 ******* 2026-02-13 03:52:06.989308 | orchestrator | =============================================================================== 2026-02-13 03:52:06.989319 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 49.71s 2026-02-13 03:52:06.989329 | orchestrator | service-ks-register : keystone | Creating services --------------------- 24.70s 2026-02-13 03:52:06.989340 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.75s 2026-02-13 03:52:06.989350 | orchestrator | keystone : Restart keystone container 
---------------------------------- 12.54s 2026-02-13 03:52:06.989361 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.21s 2026-02-13 03:52:06.989371 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.23s 2026-02-13 03:52:06.989382 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.92s 2026-02-13 03:52:06.989393 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.72s 2026-02-13 03:52:06.989416 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.74s 2026-02-13 03:52:06.989447 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.93s 2026-02-13 03:52:06.989458 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.51s 2026-02-13 03:52:06.989469 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.30s 2026-02-13 03:52:06.989480 | orchestrator | keystone : Creating default user role ----------------------------------- 3.15s 2026-02-13 03:52:06.989490 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.69s 2026-02-13 03:52:06.989501 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.35s 2026-02-13 03:52:06.989512 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.20s 2026-02-13 03:52:06.989522 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.14s 2026-02-13 03:52:06.989533 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.85s 2026-02-13 03:52:06.989544 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.81s 2026-02-13 03:52:06.989555 | orchestrator | keystone : Ensuring config directories exist 
---------------------------- 1.52s 2026-02-13 03:52:09.152566 | orchestrator | 2026-02-13 03:52:09 | INFO  | Task 741e1b0d-9fca-4aa3-8b35-c216352fdf19 (placement) was prepared for execution. 2026-02-13 03:52:09.152711 | orchestrator | 2026-02-13 03:52:09 | INFO  | It takes a moment until task 741e1b0d-9fca-4aa3-8b35-c216352fdf19 (placement) has been started and output is visible here. 2026-02-13 03:52:43.190529 | orchestrator | 2026-02-13 03:52:43.190645 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 03:52:43.190662 | orchestrator | 2026-02-13 03:52:43.190673 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 03:52:43.190683 | orchestrator | Friday 13 February 2026 03:52:13 +0000 (0:00:00.272) 0:00:00.272 ******* 2026-02-13 03:52:43.190743 | orchestrator | ok: [testbed-node-0] 2026-02-13 03:52:43.190763 | orchestrator | ok: [testbed-node-1] 2026-02-13 03:52:43.190781 | orchestrator | ok: [testbed-node-2] 2026-02-13 03:52:43.190792 | orchestrator | 2026-02-13 03:52:43.190802 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 03:52:43.190812 | orchestrator | Friday 13 February 2026 03:52:13 +0000 (0:00:00.306) 0:00:00.579 ******* 2026-02-13 03:52:43.190823 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-13 03:52:43.190834 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-13 03:52:43.190843 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-13 03:52:43.190853 | orchestrator | 2026-02-13 03:52:43.190879 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-13 03:52:43.190889 | orchestrator | 2026-02-13 03:52:43.190899 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-13 03:52:43.190909 | orchestrator | 
Friday 13 February 2026 03:52:14 +0000 (0:00:00.484) 0:00:01.063 ******* 2026-02-13 03:52:43.190919 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:52:43.190929 | orchestrator | 2026-02-13 03:52:43.190939 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-13 03:52:43.190948 | orchestrator | Friday 13 February 2026 03:52:14 +0000 (0:00:00.555) 0:00:01.619 ******* 2026-02-13 03:52:43.190958 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-13 03:52:43.190968 | orchestrator | 2026-02-13 03:52:43.190978 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-13 03:52:43.190987 | orchestrator | Friday 13 February 2026 03:52:18 +0000 (0:00:03.786) 0:00:05.405 ******* 2026-02-13 03:52:43.190996 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-13 03:52:43.191033 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-13 03:52:43.191049 | orchestrator | 2026-02-13 03:52:43.191065 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-13 03:52:43.191081 | orchestrator | Friday 13 February 2026 03:52:24 +0000 (0:00:06.501) 0:00:11.907 ******* 2026-02-13 03:52:43.191098 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-13 03:52:43.191114 | orchestrator | 2026-02-13 03:52:43.191131 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-13 03:52:43.191147 | orchestrator | Friday 13 February 2026 03:52:28 +0000 (0:00:03.639) 0:00:15.547 ******* 2026-02-13 03:52:43.191165 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-13 03:52:43.191182 | orchestrator | changed: 
[testbed-node-0] => (item=placement -> service) 2026-02-13 03:52:43.191199 | orchestrator | 2026-02-13 03:52:43.191213 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-13 03:52:43.191224 | orchestrator | Friday 13 February 2026 03:52:32 +0000 (0:00:03.937) 0:00:19.484 ******* 2026-02-13 03:52:43.191235 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-13 03:52:43.191251 | orchestrator | 2026-02-13 03:52:43.191267 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-13 03:52:43.191278 | orchestrator | Friday 13 February 2026 03:52:35 +0000 (0:00:03.159) 0:00:22.644 ******* 2026-02-13 03:52:43.191290 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-13 03:52:43.191300 | orchestrator | 2026-02-13 03:52:43.191311 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-13 03:52:43.191323 | orchestrator | Friday 13 February 2026 03:52:39 +0000 (0:00:03.656) 0:00:26.300 ******* 2026-02-13 03:52:43.191333 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:52:43.191345 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:52:43.191356 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:52:43.191367 | orchestrator | 2026-02-13 03:52:43.191378 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-13 03:52:43.191388 | orchestrator | Friday 13 February 2026 03:52:39 +0000 (0:00:00.275) 0:00:26.576 ******* 2026-02-13 03:52:43.191402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:43.191446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:43.191468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:43.191479 | orchestrator | 2026-02-13 03:52:43.191489 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-13 03:52:43.191499 | orchestrator | Friday 13 February 2026 03:52:40 +0000 (0:00:00.875) 0:00:27.452 ******* 2026-02-13 03:52:43.191509 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:52:43.191518 | orchestrator | 2026-02-13 03:52:43.191528 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-13 03:52:43.191538 | orchestrator | Friday 13 February 2026 03:52:40 +0000 (0:00:00.324) 0:00:27.776 ******* 2026-02-13 03:52:43.191547 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:52:43.191557 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:52:43.191566 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:52:43.191576 | orchestrator | 2026-02-13 03:52:43.191585 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-13 03:52:43.191595 | orchestrator | Friday 13 February 2026 03:52:41 +0000 (0:00:00.297) 0:00:28.074 ******* 2026-02-13 03:52:43.191605 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 03:52:43.191614 | orchestrator | 2026-02-13 03:52:43.191624 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-13 03:52:43.191633 | orchestrator | Friday 13 February 2026 
03:52:41 +0000 (0:00:00.525) 0:00:28.599 ******* 2026-02-13 03:52:43.191643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:43.191663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:45.833340 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:45.833470 | orchestrator | 2026-02-13 03:52:45.833498 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-13 03:52:45.833516 | orchestrator | Friday 13 February 2026 03:52:43 +0000 (0:00:01.630) 0:00:30.230 ******* 2026-02-13 03:52:45.833535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-13 03:52:45.833555 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:52:45.833576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-13 03:52:45.833594 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:52:45.833613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-13 03:52:45.833664 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:52:45.833678 | orchestrator | 2026-02-13 03:52:45.833721 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-13 03:52:45.833756 | orchestrator | Friday 13 February 2026 03:52:43 +0000 (0:00:00.482) 0:00:30.713 ******* 2026-02-13 03:52:45.833777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-13 03:52:45.833792 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:52:45.833812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-13 03:52:45.833830 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:52:45.833850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-13 03:52:45.833869 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:52:45.833887 | orchestrator | 2026-02-13 03:52:45.833904 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-13 03:52:45.833921 | orchestrator | Friday 13 February 2026 03:52:44 +0000 (0:00:00.714) 0:00:31.427 ******* 2026-02-13 03:52:45.833940 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:45.833996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:52.541802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:52.541910 | orchestrator | 2026-02-13 03:52:52.541925 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-13 03:52:52.541933 | orchestrator | Friday 13 February 2026 03:52:45 +0000 (0:00:01.452) 0:00:32.880 ******* 2026-02-13 03:52:52.541940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-13 03:52:52.541948 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:52.541988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-13 03:52:52.541995 | orchestrator | 2026-02-13 03:52:52.542001 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-13 03:52:52.542007 | orchestrator | Friday 13 February 2026 03:52:47 +0000 (0:00:02.164) 0:00:35.045 ******* 2026-02-13 03:52:52.542070 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-13 03:52:52.542079 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-13 03:52:52.542085 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-13 03:52:52.542091 | orchestrator | 2026-02-13 03:52:52.542096 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-13 03:52:52.542102 | orchestrator | Friday 13 February 2026 03:52:49 +0000 (0:00:01.366) 0:00:36.411 ******* 2026-02-13 03:52:52.542108 | orchestrator | changed: [testbed-node-1] 2026-02-13 03:52:52.542115 | orchestrator | changed: [testbed-node-0] 2026-02-13 03:52:52.542122 | orchestrator | changed: [testbed-node-2] 2026-02-13 03:52:52.542127 | orchestrator | 2026-02-13 03:52:52.542133 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-13 03:52:52.542139 | orchestrator | Friday 13 February 2026 03:52:50 +0000 (0:00:01.325) 0:00:37.737 ******* 2026-02-13 03:52:52.542145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:52:52.542151 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:52:52.542157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:52:52.542170 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:52:52.542176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:52:52.542182 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:52:52.542188 | orchestrator |
2026-02-13 03:52:52.542194 | orchestrator | TASK [placement : Check placement containers] **********************************
2026-02-13 03:52:52.542204 | orchestrator | Friday 13 February 2026 03:52:51 +0000 (0:00:00.722) 0:00:38.459 *******
2026-02-13 03:52:52.542217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:53:15.932678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:53:15.932864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-13 03:53:15.932879 | orchestrator |
2026-02-13 03:53:15.932888 | orchestrator | TASK [placement : Creating placement databases] ********************************
2026-02-13 03:53:15.932897 | orchestrator | Friday 13 February 2026 03:52:52 +0000 (0:00:01.132) 0:00:39.592 *******
2026-02-13 03:53:15.932904 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:53:15.932912 | orchestrator |
2026-02-13 03:53:15.932919 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] ***
2026-02-13 03:53:15.932926 | orchestrator | Friday 13 February 2026 03:52:54 +0000 (0:00:02.158) 0:00:41.750 *******
2026-02-13 03:53:15.932933 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:53:15.932940 | orchestrator |
2026-02-13 03:53:15.932947 | orchestrator | TASK [placement : Running placement bootstrap container] ***********************
2026-02-13 03:53:15.932953 | orchestrator | Friday 13 February 2026 03:52:56 +0000 (0:00:02.108) 0:00:43.859 *******
2026-02-13 03:53:15.932960 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:53:15.932965 | orchestrator |
2026-02-13 03:53:15.932972 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-02-13 03:53:15.932978 | orchestrator | Friday 13 February 2026 03:53:10 +0000 (0:00:13.266) 0:00:57.125 *******
2026-02-13 03:53:15.932984 | orchestrator |
2026-02-13 03:53:15.932990 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-02-13 03:53:15.932997 | orchestrator | Friday 13 February 2026 03:53:10 +0000 (0:00:00.076) 0:00:57.202 *******
2026-02-13 03:53:15.933004 | orchestrator |
2026-02-13 03:53:15.933011 | orchestrator | TASK [placement : Flush handlers] **********************************************
2026-02-13 03:53:15.933017 | orchestrator | Friday 13 February 2026 03:53:10 +0000 (0:00:00.066) 0:00:57.269 *******
2026-02-13 03:53:15.933024 | orchestrator |
2026-02-13 03:53:15.933031 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ******************
2026-02-13 03:53:15.933036 | orchestrator | Friday 13 February 2026 03:53:10 +0000 (0:00:00.069) 0:00:57.338 *******
2026-02-13 03:53:15.933043 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:53:15.933062 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:53:15.933068 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:53:15.933074 | orchestrator |
2026-02-13 03:53:15.933080 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:53:15.933087 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-13 03:53:15.933094 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-13 03:53:15.933100 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-13 03:53:15.933105 | orchestrator |
2026-02-13 03:53:15.933111 | orchestrator |
2026-02-13 03:53:15.933116 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:53:15.933122 | orchestrator | Friday 13 February 2026 03:53:15 +0000 (0:00:05.319) 0:01:02.658 *******
2026-02-13 03:53:15.933134 | orchestrator | ===============================================================================
2026-02-13 03:53:15.933139 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.27s
2026-02-13 03:53:15.933160 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.50s
2026-02-13 03:53:15.933167 | orchestrator | placement : Restart placement-api container ----------------------------- 5.32s
2026-02-13 03:53:15.933172 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.94s
2026-02-13 03:53:15.933178 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.79s
2026-02-13 03:53:15.933183 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.66s
2026-02-13 03:53:15.933189 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.64s
2026-02-13 03:53:15.933194 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.16s
2026-02-13 03:53:15.933200 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.16s
2026-02-13 03:53:15.933206 | orchestrator | placement : Creating placement databases -------------------------------- 2.16s
2026-02-13 03:53:15.933213 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.11s
2026-02-13 03:53:15.933219 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.63s
2026-02-13 03:53:15.933225 | orchestrator | placement : Copying over config.json files for services ----------------- 1.45s
2026-02-13 03:53:15.933231 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.37s
2026-02-13 03:53:15.933237 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.33s
2026-02-13 03:53:15.933242 | orchestrator | placement : Check placement containers ---------------------------------- 1.13s
2026-02-13 03:53:15.933248 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.88s
2026-02-13 03:53:15.933254 | orchestrator | placement : Copying over existing policy file --------------------------- 0.72s
2026-02-13 03:53:15.933260 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.71s
2026-02-13 03:53:15.933266 | orchestrator | placement : include_tasks ----------------------------------------------- 0.56s
2026-02-13 03:53:18.278895 | orchestrator | 2026-02-13 03:53:18 | INFO  | Task f86acd54-7188-4373-baf3-184d155c6188 (neutron) was prepared for execution.
2026-02-13 03:53:18.279020 | orchestrator | 2026-02-13 03:53:18 | INFO  | It takes a moment until task f86acd54-7188-4373-baf3-184d155c6188 (neutron) has been started and output is visible here.
2026-02-13 03:54:05.489583 | orchestrator |
2026-02-13 03:54:05.489700 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 03:54:05.489718 | orchestrator |
2026-02-13 03:54:05.489812 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 03:54:05.489824 | orchestrator | Friday 13 February 2026 03:53:22 +0000 (0:00:00.253) 0:00:00.253 *******
2026-02-13 03:54:05.489836 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:54:05.489848 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:54:05.489859 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:54:05.489870 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:54:05.489881 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:54:05.489892 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:54:05.489903 | orchestrator |
2026-02-13 03:54:05.489914 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 03:54:05.489925 | orchestrator | Friday 13 February 2026 03:53:22 +0000 (0:00:00.613) 0:00:00.866 *******
2026-02-13 03:54:05.489936 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2026-02-13 03:54:05.489948 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2026-02-13 03:54:05.489959 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2026-02-13 03:54:05.489970 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2026-02-13 03:54:05.489981 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2026-02-13 03:54:05.490069 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2026-02-13 03:54:05.490083 | orchestrator |
2026-02-13 03:54:05.490094 | orchestrator | PLAY [Apply role neutron] ******************************************************
2026-02-13 03:54:05.490105 | orchestrator |
2026-02-13 03:54:05.490117 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-13 03:54:05.490131 | orchestrator | Friday 13 February 2026 03:53:23 +0000 (0:00:00.533) 0:00:01.400 *******
2026-02-13 03:54:05.490158 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:54:05.490171 | orchestrator |
2026-02-13 03:54:05.490185 | orchestrator | TASK [neutron : Get container facts] *******************************************
2026-02-13 03:54:05.490198 | orchestrator | Friday 13 February 2026 03:53:24 +0000 (0:00:01.028) 0:00:02.429 *******
2026-02-13 03:54:05.490210 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:54:05.490223 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:54:05.490236 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:54:05.490248 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:54:05.490261 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:54:05.490274 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:54:05.490286 | orchestrator |
2026-02-13 03:54:05.490296 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2026-02-13 03:54:05.490307 | orchestrator | Friday 13 February 2026 03:53:25 +0000 (0:00:01.127) 0:00:03.556 *******
2026-02-13 03:54:05.490318 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:54:05.490329 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:54:05.490339 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:54:05.490350 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:54:05.490361 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:54:05.490371 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:54:05.490382 | orchestrator |
2026-02-13 03:54:05.490393 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2026-02-13 03:54:05.490404 | orchestrator | Friday 13 February 2026 03:53:26 +0000 (0:00:01.055) 0:00:04.611 *******
2026-02-13 03:54:05.490415 | orchestrator | ok: [testbed-node-0] => {
2026-02-13 03:54:05.490426 | orchestrator |  "changed": false,
2026-02-13 03:54:05.490437 | orchestrator |  "msg": "All assertions passed"
2026-02-13 03:54:05.490448 | orchestrator | }
2026-02-13 03:54:05.490459 | orchestrator | ok: [testbed-node-1] => {
2026-02-13 03:54:05.490470 | orchestrator |  "changed": false,
2026-02-13 03:54:05.490481 | orchestrator |  "msg": "All assertions passed"
2026-02-13 03:54:05.490491 | orchestrator | }
2026-02-13 03:54:05.490502 | orchestrator | ok: [testbed-node-2] => {
2026-02-13 03:54:05.490513 | orchestrator |  "changed": false,
2026-02-13 03:54:05.490523 | orchestrator |  "msg": "All assertions passed"
2026-02-13 03:54:05.490534 | orchestrator | }
2026-02-13 03:54:05.490545 | orchestrator | ok: [testbed-node-3] => {
2026-02-13 03:54:05.490556 | orchestrator |  "changed": false,
2026-02-13 03:54:05.490566 | orchestrator |  "msg": "All assertions passed"
2026-02-13 03:54:05.490577 | orchestrator | }
2026-02-13 03:54:05.490588 | orchestrator | ok: [testbed-node-4] => {
2026-02-13 03:54:05.490598 | orchestrator |  "changed": false,
2026-02-13 03:54:05.490610 | orchestrator |  "msg": "All assertions passed"
2026-02-13 03:54:05.490620 | orchestrator | }
2026-02-13 03:54:05.490631 | orchestrator | ok: [testbed-node-5] => {
2026-02-13 03:54:05.490642 | orchestrator |  "changed": false,
2026-02-13 03:54:05.490653 | orchestrator |  "msg": "All assertions passed"
2026-02-13 03:54:05.490663 | orchestrator | }
2026-02-13 03:54:05.490674 | orchestrator |
2026-02-13 03:54:05.490685 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2026-02-13 03:54:05.490696 | orchestrator | Friday 13 February 2026 03:53:27 +0000 (0:00:00.820) 0:00:05.432 *******
2026-02-13 03:54:05.490707 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:05.490718 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:05.490755 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:05.490776 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:05.490787 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:05.490798 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:05.490808 | orchestrator |
2026-02-13 03:54:05.490819 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2026-02-13 03:54:05.490830 | orchestrator | Friday 13 February 2026 03:53:28 +0000 (0:00:00.590) 0:00:06.022 *******
2026-02-13 03:54:05.490841 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2026-02-13 03:54:05.490852 | orchestrator |
2026-02-13 03:54:05.490863 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2026-02-13 03:54:05.490874 | orchestrator | Friday 13 February 2026 03:53:32 +0000 (0:00:03.887) 0:00:09.910 *******
2026-02-13 03:54:05.490885 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2026-02-13 03:54:05.490897 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2026-02-13 03:54:05.490908 | orchestrator |
2026-02-13 03:54:05.490936 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2026-02-13 03:54:05.490948 | orchestrator | Friday 13 February 2026 03:53:38 +0000 (0:00:06.286) 0:00:16.196 *******
2026-02-13 03:54:05.490959 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-13 03:54:05.490970 | orchestrator |
2026-02-13 03:54:05.490981 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2026-02-13 03:54:05.490991 | orchestrator | Friday 13 February 2026 03:53:41 +0000 (0:00:03.138) 0:00:19.334 *******
2026-02-13 03:54:05.491002 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-13 03:54:05.491013 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2026-02-13 03:54:05.491024 | orchestrator |
2026-02-13 03:54:05.491035 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2026-02-13 03:54:05.491045 | orchestrator | Friday 13 February 2026 03:53:45 +0000 (0:00:04.116) 0:00:23.451 *******
2026-02-13 03:54:05.491056 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-13 03:54:05.491067 | orchestrator |
2026-02-13 03:54:05.491077 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2026-02-13 03:54:05.491088 | orchestrator | Friday 13 February 2026 03:53:48 +0000 (0:00:03.116) 0:00:26.567 *******
2026-02-13 03:54:05.491098 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2026-02-13 03:54:05.491109 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2026-02-13 03:54:05.491120 | orchestrator |
2026-02-13 03:54:05.491130 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-13 03:54:05.491141 | orchestrator | Friday 13 February 2026 03:53:56 +0000 (0:00:07.596) 0:00:34.164 *******
2026-02-13 03:54:05.491152 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:05.491162 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:05.491173 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:05.491184 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:05.491194 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:05.491210 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:05.491221 | orchestrator |
2026-02-13 03:54:05.491232 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2026-02-13 03:54:05.491243 | orchestrator | Friday 13 February 2026 03:53:57 +0000 (0:00:00.742) 0:00:34.907 *******
2026-02-13 03:54:05.491254 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:05.491264 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:05.491275 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:05.491286 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:05.491296 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:05.491307 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:05.491317 | orchestrator |
2026-02-13 03:54:05.491328 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-02-13 03:54:05.491339 | orchestrator | Friday 13 February 2026 03:53:59 +0000 (0:00:02.058) 0:00:36.966 *******
2026-02-13 03:54:05.491357 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:54:05.491367 | orchestrator | ok: [testbed-node-1]
2026-02-13 03:54:05.491378 | orchestrator | ok: [testbed-node-2]
2026-02-13 03:54:05.491389 | orchestrator | ok: [testbed-node-4]
2026-02-13 03:54:05.491400 | orchestrator | ok: [testbed-node-5]
2026-02-13 03:54:05.491410 | orchestrator | ok: [testbed-node-3]
2026-02-13 03:54:05.491421 | orchestrator |
2026-02-13 03:54:05.491432 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-02-13 03:54:05.491443 | orchestrator | Friday 13 February 2026 03:54:01 +0000 (0:00:01.969) 0:00:38.935 *******
2026-02-13 03:54:05.491453 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:05.491464 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:05.491475 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:05.491485 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:05.491496 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:05.491507 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:05.491517 | orchestrator |
2026-02-13 03:54:05.491528 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-02-13 03:54:05.491539 | orchestrator | Friday 13 February 2026 03:54:03 +0000 (0:00:01.967) 0:00:40.902 *******
2026-02-13 03:54:05.491554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:05.491580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:10.888896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:10.889047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:10.889065 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:10.889078 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:10.889090 | orchestrator |
2026-02-13 03:54:10.889103 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-02-13 03:54:10.889115 | orchestrator | Friday 13 February 2026 03:54:05 +0000 (0:00:02.457) 0:00:43.360 *******
2026-02-13 03:54:10.889126 | orchestrator | [WARNING]: Skipped
2026-02-13 03:54:10.889139 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-02-13 03:54:10.889151 | orchestrator | due to this access issue:
2026-02-13 03:54:10.889163 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-02-13 03:54:10.889173 | orchestrator | a directory
2026-02-13 03:54:10.889184 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-13 03:54:10.889195 | orchestrator |
2026-02-13 03:54:10.889206 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-13 03:54:10.889217 | orchestrator | Friday 13 February 2026 03:54:06 +0000 (0:00:00.849) 0:00:44.209 *******
2026-02-13 03:54:10.889229 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 03:54:10.889241 | orchestrator |
2026-02-13 03:54:10.889253 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-02-13 03:54:10.889281 | orchestrator | Friday 13 February 2026 03:54:07 +0000 (0:00:01.261) 0:00:45.470 *******
2026-02-13 03:54:10.889293 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:10.889319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:10.889332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:10.889343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:10.889363 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:15.670961 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:15.671094 | orchestrator |
2026-02-13 03:54:15.671114 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2026-02-13 03:54:15.671128 | orchestrator | Friday 13 February 2026 03:54:10 +0000 (0:00:03.287) 0:00:48.757 *******
2026-02-13 03:54:15.671142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:15.671156 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:15.671169 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'],
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 03:54:15.671879 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:54:15.671960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 03:54:15.671976 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:54:15.672034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:54:15.672047 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:54:15.672068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:54:15.672079 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:54:15.672090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:54:15.672101 | orchestrator | skipping: [testbed-node-5] 
2026-02-13 03:54:15.672112 | orchestrator | 2026-02-13 03:54:15.672123 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-13 03:54:15.672134 | orchestrator | Friday 13 February 2026 03:54:12 +0000 (0:00:01.981) 0:00:50.739 ******* 2026-02-13 03:54:15.672145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 03:54:15.672156 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:54:15.672175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 03:54:20.797198 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:54:20.797347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 03:54:20.797369 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:54:20.797418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:54:20.797433 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:54:20.797444 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:54:20.797456 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:54:20.797468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:54:20.797506 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:54:20.797518 | orchestrator | 2026-02-13 
03:54:20.797530 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-13 03:54:20.797542 | orchestrator | Friday 13 February 2026 03:54:15 +0000 (0:00:02.803) 0:00:53.542 ******* 2026-02-13 03:54:20.797553 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:54:20.797564 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:54:20.797574 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:54:20.797585 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:54:20.797595 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:54:20.797606 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:54:20.797616 | orchestrator | 2026-02-13 03:54:20.797627 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-13 03:54:20.797638 | orchestrator | Friday 13 February 2026 03:54:17 +0000 (0:00:02.256) 0:00:55.798 ******* 2026-02-13 03:54:20.797649 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:54:20.797659 | orchestrator | 2026-02-13 03:54:20.797670 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-13 03:54:20.797698 | orchestrator | Friday 13 February 2026 03:54:18 +0000 (0:00:00.140) 0:00:55.939 ******* 2026-02-13 03:54:20.797710 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:54:20.797721 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:54:20.797757 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:54:20.797770 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:54:20.797782 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:54:20.797794 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:54:20.797807 | orchestrator | 2026-02-13 03:54:20.797820 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-13 03:54:20.797832 | orchestrator | Friday 13 February 2026 03:54:18 +0000 (0:00:00.581) 
0:00:56.521 ******* 2026-02-13 03:54:20.797853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 03:54:20.797868 | orchestrator | skipping: [testbed-node-0] 2026-02-13 03:54:20.797881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 
03:54:20.797902 | orchestrator | skipping: [testbed-node-2] 2026-02-13 03:54:20.797916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 03:54:20.797929 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:54:20.797942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:54:20.797956 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:54:20.797982 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:54:29.016094 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:54:29.016204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:54:29.016222 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:54:29.016235 | orchestrator | 2026-02-13 03:54:29.016247 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-13 03:54:29.016260 | orchestrator | Friday 13 February 2026 03:54:20 +0000 (0:00:02.141) 0:00:58.662 ******* 2026-02-13 03:54:29.016273 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:54:29.016311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:54:29.016325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-13 03:54:29.016370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:54:29.016384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-13 03:54:29.016403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-13 03:54:29.016415 | orchestrator | 2026-02-13 03:54:29.016431 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-13 03:54:29.016450 | orchestrator | Friday 13 February 2026 03:54:23 +0000 (0:00:03.076) 0:01:01.738 ******* 2026-02-13 03:54:29.016463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:54:29.016475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:54:29.016500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:54:33.623393 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-13 03:54:33.623597 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-13 03:54:33.623615 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:33.623628 | orchestrator |
2026-02-13 03:54:33.623642 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-02-13 03:54:33.623655 | orchestrator | Friday 13 February 2026 03:54:28 +0000 (0:00:05.148) 0:01:06.887 *******
2026-02-13 03:54:33.623667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:33.623697 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:33.623751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:33.623774 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:33.623785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:33.623797 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:33.623808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:33.623820 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:33.623831 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:33.623842 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:33.623859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:33.623871 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:33.623882 | orchestrator |
2026-02-13 03:54:33.623894 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-02-13 03:54:33.623916 | orchestrator | Friday 13 February 2026 03:54:31 +0000 (0:00:02.099) 0:01:08.986 *******
2026-02-13 03:54:33.623929 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:33.623942 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:33.623954 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:33.623967 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:54:33.623978 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:54:33.623991 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:54:33.624003 | orchestrator |
2026-02-13 03:54:33.624015 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-02-13 03:54:33.624036 | orchestrator | Friday 13 February 2026 03:54:33 +0000 (0:00:02.507) 0:01:11.493 *******
2026-02-13 03:54:52.097018 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:52.097147 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:52.097167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:52.097180 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:52.097192 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:52.097204 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:52.097216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:52.097287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:52.097302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:52.097316 | orchestrator |
2026-02-13 03:54:52.097336 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-02-13 03:54:52.097356 | orchestrator | Friday 13 February 2026 03:54:36 +0000 (0:00:03.245) 0:01:14.739 *******
2026-02-13 03:54:52.097375 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:52.097390 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:52.097406 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:52.097422 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:52.097438 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:52.097453 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:52.097470 | orchestrator |
2026-02-13 03:54:52.097485 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-02-13 03:54:52.097501 | orchestrator | Friday 13 February 2026 03:54:39 +0000 (0:00:02.277) 0:01:17.017 *******
2026-02-13 03:54:52.097541 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:52.097558 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:52.097591 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:52.097609 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:52.097627 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:52.097645 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:52.097664 | orchestrator |
2026-02-13 03:54:52.097684 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-02-13 03:54:52.097703 | orchestrator | Friday 13 February 2026 03:54:41 +0000 (0:00:02.192) 0:01:19.209 *******
2026-02-13 03:54:52.097718 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:52.097732 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:52.097772 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:52.097785 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:52.097798 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:52.097811 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:52.097823 | orchestrator |
2026-02-13 03:54:52.097836 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-02-13 03:54:52.097862 | orchestrator | Friday 13 February 2026 03:54:43 +0000 (0:00:02.166) 0:01:21.376 *******
2026-02-13 03:54:52.097875 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:52.097888 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:52.097900 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:52.097910 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:52.097921 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:52.097931 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:52.097942 | orchestrator |
2026-02-13 03:54:52.097953 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-02-13 03:54:52.097964 | orchestrator | Friday 13 February 2026 03:54:45 +0000 (0:00:02.103) 0:01:23.479 *******
2026-02-13 03:54:52.097975 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:52.097985 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:52.097996 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:52.098006 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:52.098090 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:52.098102 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:52.098113 | orchestrator |
2026-02-13 03:54:52.098124 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-02-13 03:54:52.098135 | orchestrator | Friday 13 February 2026 03:54:47 +0000 (0:00:02.093) 0:01:25.573 *******
2026-02-13 03:54:52.098145 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:52.098156 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:52.098166 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:52.098177 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:52.098197 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:52.098208 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:52.098218 | orchestrator |
2026-02-13 03:54:52.098229 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-02-13 03:54:52.098240 | orchestrator | Friday 13 February 2026 03:54:50 +0000 (0:00:02.316) 0:01:27.889 *******
2026-02-13 03:54:52.098251 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-13 03:54:52.098262 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:52.098273 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-13 03:54:52.098283 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:52.098294 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-13 03:54:52.098305 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:52.098316 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-13 03:54:52.098327 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:52.098350 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-13 03:54:56.400342 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:56.400466 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-13 03:54:56.400482 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:56.400493 | orchestrator |
2026-02-13 03:54:56.400504 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-02-13 03:54:56.400515 | orchestrator | Friday 13 February 2026 03:54:52 +0000 (0:00:02.069) 0:01:29.959 *******
2026-02-13 03:54:56.400528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:56.400576 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:56.400589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:56.400600 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:54:56.400611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:56.400621 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:56.400645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:56.400656 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:54:56.400684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:56.400703 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:54:56.400713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:54:56.400723 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:54:56.400733 | orchestrator |
2026-02-13 03:54:56.400786 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-02-13 03:54:56.400797 | orchestrator | Friday 13 February 2026 03:54:54 +0000 (0:00:02.094) 0:01:32.053 *******
2026-02-13 03:54:56.400807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:56.400818 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:54:56.400833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:54:56.400844 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:54:56.400863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:55:21.177166 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.177283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:55:21.177301 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:55:21.177314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:55:21.177325 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:55:21.177336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:55:21.177348 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:55:21.177359 | orchestrator |
2026-02-13 03:55:21.177372 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-02-13 03:55:21.177385 | orchestrator | Friday 13 February 2026 03:54:56 +0000 (0:00:02.216) 0:01:34.269 *******
2026-02-13 03:55:21.177395 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.177406 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:55:21.177417 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.177428 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:55:21.177439 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:55:21.177450 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:55:21.177461 | orchestrator |
2026-02-13 03:55:21.177488 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-02-13 03:55:21.177507 | orchestrator | Friday 13 February 2026 03:54:58 +0000 (0:00:02.206) 0:01:36.476 *******
2026-02-13 03:55:21.177526 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.177544 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:55:21.177562 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.177576 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:55:21.177587 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:55:21.177598 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:55:21.177609 | orchestrator |
2026-02-13 03:55:21.177620 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-13 03:55:21.177653 | orchestrator | Friday 13 February 2026 03:55:02 +0000 (0:00:03.720) 0:01:40.196 *******
2026-02-13 03:55:21.177665 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.177678 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:55:21.177690 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.177702 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:55:21.177714 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:55:21.177726 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:55:21.177738 | orchestrator |
2026-02-13 03:55:21.177786 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-13 03:55:21.177805 | orchestrator | Friday 13 February 2026 03:55:04 +0000 (0:00:02.147) 0:01:42.344 *******
2026-02-13 03:55:21.177817 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.177828 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:55:21.177839 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.177849 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:55:21.177859 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:55:21.177870 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:55:21.177881 | orchestrator |
2026-02-13 03:55:21.177892 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-13 03:55:21.177921 | orchestrator | Friday 13 February 2026 03:55:06 +0000 (0:00:02.048) 0:01:44.392 *******
2026-02-13 03:55:21.177933 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:55:21.177944 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.177954 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.177965 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:55:21.177976 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:55:21.177986 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:55:21.177997 | orchestrator |
2026-02-13 03:55:21.178008 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-13 03:55:21.178073 | orchestrator | Friday 13 February 2026 03:55:08 +0000 (0:00:02.345) 0:01:46.737 *******
2026-02-13 03:55:21.178085 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.178102 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.178126 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:55:21.178151 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:55:21.178169 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:55:21.178187 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:55:21.178205 | orchestrator |
2026-02-13 03:55:21.178223 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-13 03:55:21.178242 | orchestrator | Friday 13 February 2026 03:55:11 +0000 (0:00:02.174) 0:01:48.912 *******
2026-02-13 03:55:21.178260 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:55:21.178277 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.178296 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.178316 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:55:21.178333 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:55:21.178351 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:55:21.178362 | orchestrator |
2026-02-13 03:55:21.178373 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-13 03:55:21.178384 | orchestrator | Friday 13 February 2026 03:55:13 +0000 (0:00:02.076) 0:01:50.988 *******
2026-02-13 03:55:21.178394 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.178405 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:55:21.178416 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.178426 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:55:21.178437 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:55:21.178448 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:55:21.178458 | orchestrator |
2026-02-13 03:55:21.178469 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-13 03:55:21.178479 | orchestrator | Friday 13 February 2026 03:55:15 +0000 (0:00:02.023) 0:01:53.012 *******
2026-02-13 03:55:21.178490 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.178514 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:55:21.178524 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:55:21.178535 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.178545 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:55:21.178556 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:55:21.178566 | orchestrator |
2026-02-13 03:55:21.178577 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-13 03:55:21.178588 | orchestrator | Friday 13 February 2026 03:55:17 +0000 (0:00:02.197) 0:01:55.210 *******
2026-02-13 03:55:21.178599 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-13 03:55:21.178611 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:55:21.178622 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-13 03:55:21.178633 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.178643 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-13 03:55:21.178654 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:21.178665 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-13 03:55:21.178676 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:55:21.178686 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-13 03:55:21.178697 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:55:21.178708 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-13 03:55:21.178727 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:55:21.178738 | orchestrator |
2026-02-13 03:55:21.178749 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-13 03:55:21.178824 | orchestrator | Friday 13 February 2026 03:55:19 +0000 (0:00:01.695) 0:01:56.906 *******
2026-02-13 03:55:21.178837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:55:21.178851 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:55:21.178876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-13 03:55:23.886304 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:55:23.886436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True,
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-13 03:55:23.886456 | orchestrator | skipping: [testbed-node-1] 2026-02-13 03:55:23.886470 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:55:23.886482 | orchestrator | skipping: [testbed-node-3] 2026-02-13 03:55:23.886509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:55:23.886521 | orchestrator | skipping: [testbed-node-4] 2026-02-13 03:55:23.886532 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 03:55:23.886543 | orchestrator | skipping: [testbed-node-5] 2026-02-13 03:55:23.886554 | orchestrator | 2026-02-13 03:55:23.886567 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-13 03:55:23.886578 | orchestrator | Friday 13 February 2026 03:55:21 +0000 (0:00:02.137) 0:01:59.043 ******* 2026-02-13 03:55:23.886607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-02-13 03:55:23.886629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:55:23.886646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-13 03:55:23.886658 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-13 03:55:23.886670 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-13 03:55:23.886700 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 03:57:44.628627 | orchestrator |
2026-02-13 03:57:44.628769 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-02-13 03:57:44.628787 | orchestrator | Friday 13 February 2026 03:55:23 +0000 (0:00:02.715) 0:02:01.758 *******
2026-02-13 03:57:44.628799 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:57:44.628865 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:57:44.628879 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:57:44.628891 | orchestrator | skipping: [testbed-node-3]
2026-02-13 03:57:44.628902 | orchestrator | skipping: [testbed-node-4]
2026-02-13 03:57:44.628913 | orchestrator | skipping: [testbed-node-5]
2026-02-13 03:57:44.628923 | orchestrator |
2026-02-13 03:57:44.628935 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-02-13 03:57:44.628946 | orchestrator | Friday 13 February 2026 03:55:24 +0000 (0:00:00.771) 0:02:02.530 *******
2026-02-13 03:57:44.628957 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:57:44.628968 | orchestrator |
2026-02-13 03:57:44.628979 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-02-13 03:57:44.628990 | orchestrator | Friday 13 February 2026 03:55:26 +0000 (0:00:02.064) 0:02:04.595 *******
2026-02-13 03:57:44.629001 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:57:44.629011 | orchestrator |
2026-02-13 03:57:44.629022 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-02-13 03:57:44.629033 | orchestrator | Friday 13 February 2026 03:55:28 +0000 (0:00:02.145) 0:02:06.740 *******
2026-02-13 03:57:44.629044 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:57:44.629055 | orchestrator |
2026-02-13 03:57:44.629066 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-13 03:57:44.629077 | orchestrator | Friday 13 February 2026 03:56:10 +0000 (0:00:41.823) 0:02:48.563 *******
2026-02-13 03:57:44.629088 | orchestrator |
2026-02-13 03:57:44.629099 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-13 03:57:44.629110 | orchestrator | Friday 13 February 2026 03:56:10 +0000 (0:00:00.068) 0:02:48.632 *******
2026-02-13 03:57:44.629121 | orchestrator |
2026-02-13 03:57:44.629132 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-13 03:57:44.629142 | orchestrator | Friday 13 February 2026 03:56:10 +0000 (0:00:00.070) 0:02:48.703 *******
2026-02-13 03:57:44.629154 | orchestrator |
2026-02-13 03:57:44.629167 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-13 03:57:44.629180 | orchestrator | Friday 13 February 2026 03:56:10 +0000 (0:00:00.082) 0:02:48.785 *******
2026-02-13 03:57:44.629192 | orchestrator |
2026-02-13 03:57:44.629222 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-13 03:57:44.629235 | orchestrator | Friday 13 February 2026 03:56:10 +0000 (0:00:00.072) 0:02:48.858 *******
2026-02-13 03:57:44.629247 | orchestrator |
2026-02-13 03:57:44.629259 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-02-13 03:57:44.629271 | orchestrator | Friday 13 February 2026 03:56:11 +0000 (0:00:00.069) 0:02:48.928 *******
2026-02-13 03:57:44.629283 | orchestrator |
2026-02-13 03:57:44.629296 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-02-13 03:57:44.629308 | orchestrator | Friday 13 February 2026 03:56:11 +0000 (0:00:00.071) 0:02:49.000 *******
2026-02-13 03:57:44.629342 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:57:44.629356 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:57:44.629368 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:57:44.629380 | orchestrator |
2026-02-13 03:57:44.629393 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-02-13 03:57:44.629405 | orchestrator | Friday 13 February 2026 03:56:41 +0000 (0:00:30.349) 0:03:19.349 *******
2026-02-13 03:57:44.629418 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:57:44.629430 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:57:44.629442 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:57:44.629455 | orchestrator |
2026-02-13 03:57:44.629468 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 03:57:44.629482 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-13 03:57:44.629497 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-13 03:57:44.629518 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-02-13 03:57:44.629583 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-13 03:57:44.629608 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-13 03:57:44.629625 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-13 03:57:44.629643 | orchestrator |
2026-02-13 03:57:44.629661 | orchestrator |
2026-02-13 03:57:44.629680 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 03:57:44.629700 | orchestrator | Friday 13 February 2026 03:57:44 +0000 (0:01:02.700) 0:04:22.050 *******
2026-02-13 03:57:44.629719 | orchestrator | ===============================================================================
2026-02-13 03:57:44.629737 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 62.70s
2026-02-13 03:57:44.629756 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 41.82s
2026-02-13 03:57:44.629775 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.35s
2026-02-13 03:57:44.629846 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.60s
2026-02-13 03:57:44.629869 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.29s
2026-02-13 03:57:44.629887 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.15s
2026-02-13 03:57:44.629907 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.12s
2026-02-13 03:57:44.629925 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.89s
2026-02-13 03:57:44.629943 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.72s
2026-02-13 03:57:44.629961 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.29s
2026-02-13 03:57:44.629980 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.25s
2026-02-13 03:57:44.629999 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.14s
2026-02-13 03:57:44.630091 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.12s
2026-02-13 03:57:44.630114 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.08s
2026-02-13 03:57:44.630133 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 2.80s
2026-02-13 03:57:44.630232 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.72s
2026-02-13 03:57:44.630271 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.51s
2026-02-13 03:57:44.630289 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.46s
2026-02-13 03:57:44.630306 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 2.35s
2026-02-13 03:57:44.630324 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 2.32s
2026-02-13 03:57:46.948327 | orchestrator | 2026-02-13 03:57:46 | INFO  | Task 5bd0d10a-0d22-43e3-80d1-6bb1e6790648 (nova) was prepared for execution.
2026-02-13 03:57:46.948456 | orchestrator | 2026-02-13 03:57:46 | INFO  | It takes a moment until task 5bd0d10a-0d22-43e3-80d1-6bb1e6790648 (nova) has been started and output is visible here.
2026-02-13 03:59:40.584160 | orchestrator |
2026-02-13 03:59:40.584287 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 03:59:40.584304 | orchestrator |
2026-02-13 03:59:40.584316 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-02-13 03:59:40.584328 | orchestrator | Friday 13 February 2026 03:57:51 +0000 (0:00:00.271) 0:00:00.271 *******
2026-02-13 03:59:40.584340 | orchestrator | changed: [testbed-manager]
2026-02-13 03:59:40.584351 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.584362 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:59:40.584373 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:59:40.584384 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:59:40.584394 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:59:40.584405 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:59:40.584416 | orchestrator |
2026-02-13 03:59:40.584427 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 03:59:40.584437 | orchestrator | Friday 13 February 2026 03:57:51 +0000 (0:00:00.799) 0:00:01.071 *******
2026-02-13 03:59:40.584448 | orchestrator | changed: [testbed-manager]
2026-02-13 03:59:40.584459 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.584469 | orchestrator | changed: [testbed-node-1]
2026-02-13 03:59:40.584480 | orchestrator | changed: [testbed-node-2]
2026-02-13 03:59:40.584491 | orchestrator | changed: [testbed-node-3]
2026-02-13 03:59:40.584501 | orchestrator | changed: [testbed-node-4]
2026-02-13 03:59:40.584512 | orchestrator | changed: [testbed-node-5]
2026-02-13 03:59:40.584523 | orchestrator |
2026-02-13 03:59:40.584534 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 03:59:40.584545 | orchestrator | Friday 13 February 2026 03:57:52 +0000 (0:00:00.849) 0:00:01.920 *******
2026-02-13 03:59:40.584556 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-02-13 03:59:40.584567 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-02-13 03:59:40.584578 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-02-13 03:59:40.584589 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-02-13 03:59:40.584599 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-02-13 03:59:40.584610 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-02-13 03:59:40.584621 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-02-13 03:59:40.584631 | orchestrator |
2026-02-13 03:59:40.584642 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-02-13 03:59:40.584653 | orchestrator |
2026-02-13 03:59:40.584664 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-13 03:59:40.584674 | orchestrator | Friday 13 February 2026 03:57:53 +0000 (0:00:00.705) 0:00:02.626 *******
2026-02-13 03:59:40.584685 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:59:40.584697 | orchestrator |
2026-02-13 03:59:40.584710 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-02-13 03:59:40.584722 | orchestrator | Friday 13 February 2026 03:57:54 +0000 (0:00:00.716) 0:00:03.342 *******
2026-02-13 03:59:40.584735 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-02-13 03:59:40.584772 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-02-13 03:59:40.584784 | orchestrator |
2026-02-13 03:59:40.584798 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-02-13 03:59:40.584811 | orchestrator | Friday 13 February 2026 03:57:57 +0000 (0:00:03.690) 0:00:07.032 *******
2026-02-13 03:59:40.584823 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-13 03:59:40.584836 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-13 03:59:40.584848 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.584899 | orchestrator |
2026-02-13 03:59:40.584912 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-13 03:59:40.584925 | orchestrator | Friday 13 February 2026 03:58:02 +0000 (0:00:04.087) 0:00:11.120 *******
2026-02-13 03:59:40.584937 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.584949 | orchestrator |
2026-02-13 03:59:40.584962 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-02-13 03:59:40.584974 | orchestrator | Friday 13 February 2026 03:58:02 +0000 (0:00:00.633) 0:00:11.753 *******
2026-02-13 03:59:40.584987 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.584999 | orchestrator |
2026-02-13 03:59:40.585011 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-02-13 03:59:40.585024 | orchestrator | Friday 13 February 2026 03:58:03 +0000 (0:00:01.248) 0:00:13.002 *******
2026-02-13 03:59:40.585036 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.585048 | orchestrator |
2026-02-13 03:59:40.585061 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-13 03:59:40.585073 | orchestrator | Friday 13 February 2026 03:58:06 +0000 (0:00:02.570) 0:00:15.572 *******
2026-02-13 03:59:40.585084 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:59:40.585095 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.585106 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.585116 | orchestrator |
2026-02-13 03:59:40.585127 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-13 03:59:40.585138 | orchestrator | Friday 13 February 2026 03:58:06 +0000 (0:00:00.316) 0:00:15.889 *******
2026-02-13 03:59:40.585149 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:59:40.585159 | orchestrator |
2026-02-13 03:59:40.585170 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-13 03:59:40.585180 | orchestrator | Friday 13 February 2026 03:58:38 +0000 (0:00:31.752) 0:00:47.641 *******
2026-02-13 03:59:40.585191 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.585202 | orchestrator |
2026-02-13 03:59:40.585213 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-13 03:59:40.585223 | orchestrator | Friday 13 February 2026 03:58:52 +0000 (0:00:14.155) 0:01:01.797 *******
2026-02-13 03:59:40.585234 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:59:40.585244 | orchestrator |
2026-02-13 03:59:40.585255 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-13 03:59:40.585266 | orchestrator | Friday 13 February 2026 03:59:03 +0000 (0:00:10.450) 0:01:12.247 *******
2026-02-13 03:59:40.585294 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:59:40.585305 | orchestrator |
2026-02-13 03:59:40.585322 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-13 03:59:40.585334 | orchestrator | Friday 13 February 2026 03:59:03 +0000 (0:00:00.647) 0:01:12.895 *******
2026-02-13 03:59:40.585344 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:59:40.585355 | orchestrator |
2026-02-13 03:59:40.585366 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-13 03:59:40.585377 | orchestrator | Friday 13 February 2026 03:59:04 +0000 (0:00:00.457) 0:01:13.353 *******
2026-02-13 03:59:40.585388 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:59:40.585399 | orchestrator |
2026-02-13 03:59:40.585410 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-13 03:59:40.585430 | orchestrator | Friday 13 February 2026 03:59:04 +0000 (0:00:00.590) 0:01:13.943 *******
2026-02-13 03:59:40.585440 | orchestrator | ok: [testbed-node-0]
2026-02-13 03:59:40.585451 | orchestrator |
2026-02-13 03:59:40.585462 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-13 03:59:40.585473 | orchestrator | Friday 13 February 2026 03:59:22 +0000 (0:00:17.181) 0:01:31.125 *******
2026-02-13 03:59:40.585483 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:59:40.585494 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.585505 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.585515 | orchestrator |
2026-02-13 03:59:40.585526 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-13 03:59:40.585536 | orchestrator |
2026-02-13 03:59:40.585547 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-13 03:59:40.585558 | orchestrator | Friday 13 February 2026 03:59:22 +0000 (0:00:00.302) 0:01:31.428 *******
2026-02-13 03:59:40.585568 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 03:59:40.585579 | orchestrator |
2026-02-13 03:59:40.585590 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-13 03:59:40.585600 | orchestrator | Friday 13 February 2026 03:59:23 +0000 (0:00:00.776) 0:01:32.205 *******
2026-02-13 03:59:40.585611 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.585621 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.585632 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.585642 | orchestrator |
2026-02-13 03:59:40.585653 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-13 03:59:40.585664 | orchestrator | Friday 13 February 2026 03:59:25 +0000 (0:00:02.054) 0:01:34.259 *******
2026-02-13 03:59:40.585674 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.585685 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.585695 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.585706 | orchestrator |
2026-02-13 03:59:40.585716 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-13 03:59:40.585727 | orchestrator | Friday 13 February 2026 03:59:27 +0000 (0:00:02.048) 0:01:36.308 *******
2026-02-13 03:59:40.585737 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:59:40.585748 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.585758 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.585769 | orchestrator |
2026-02-13 03:59:40.585779 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-13 03:59:40.585790 | orchestrator | Friday 13 February 2026 03:59:27 +0000 (0:00:00.521) 0:01:36.830 *******
2026-02-13 03:59:40.585801 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-13 03:59:40.585811 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.585822 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-13 03:59:40.585832 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.585843 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-13 03:59:40.585854 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-13 03:59:40.585882 | orchestrator |
2026-02-13 03:59:40.585893 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-13 03:59:40.585904 | orchestrator | Friday 13 February 2026 03:59:35 +0000 (0:00:07.553) 0:01:44.383 *******
2026-02-13 03:59:40.585914 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:59:40.585925 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.585936 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.585946 | orchestrator |
2026-02-13 03:59:40.585957 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-13 03:59:40.585968 | orchestrator | Friday 13 February 2026 03:59:35 +0000 (0:00:00.338) 0:01:44.722 *******
2026-02-13 03:59:40.585979 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-13 03:59:40.585989 | orchestrator | skipping: [testbed-node-0]
2026-02-13 03:59:40.586000 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-13 03:59:40.586084 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.586100 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-13 03:59:40.586111 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.586121 | orchestrator |
2026-02-13 03:59:40.586132 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-13 03:59:40.586143 | orchestrator | Friday 13 February 2026 03:59:36 +0000 (0:00:01.065) 0:01:45.788 *******
2026-02-13 03:59:40.586153 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.586164 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.586175 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.586185 | orchestrator |
2026-02-13 03:59:40.586196 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-13 03:59:40.586207 | orchestrator | Friday 13 February 2026 03:59:37 +0000 (0:00:00.483) 0:01:46.271 *******
2026-02-13 03:59:40.586217 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.586228 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.586238 | orchestrator | changed: [testbed-node-0]
2026-02-13 03:59:40.586249 | orchestrator |
2026-02-13 03:59:40.586259 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-13 03:59:40.586270 | orchestrator | Friday 13 February 2026 03:59:38 +0000 (0:00:00.975) 0:01:47.247 *******
2026-02-13 03:59:40.586281 | orchestrator | skipping: [testbed-node-1]
2026-02-13 03:59:40.586291 | orchestrator | skipping: [testbed-node-2]
2026-02-13 03:59:40.586311 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:00:57.950439 | orchestrator |
2026-02-13 04:00:57.950545 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-13 04:00:57.950562 | orchestrator | Friday 13 February 2026 03:59:40 +0000 (0:00:02.432) 0:01:49.680 *******
2026-02-13 04:00:57.950592 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:00:57.950606 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:00:57.950617 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:00:57.950629 | orchestrator |
2026-02-13 04:00:57.950640 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-13 04:00:57.950652 | orchestrator | Friday 13 February 2026 04:00:02 +0000 (0:00:21.676) 0:02:11.356 *******
2026-02-13 04:00:57.950663 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:00:57.950674 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:00:57.950685 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:00:57.950696 | orchestrator |
2026-02-13 04:00:57.950707 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-13 04:00:57.950717 | orchestrator | Friday 13 February 2026 04:00:14 +0000 (0:00:11.949) 0:02:23.305 *******
2026-02-13 04:00:57.950728 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:00:57.950739 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:00:57.950750 | orchestrator | skipping:
[testbed-node-2] 2026-02-13 04:00:57.950761 | orchestrator | 2026-02-13 04:00:57.950771 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-02-13 04:00:57.950782 | orchestrator | Friday 13 February 2026 04:00:15 +0000 (0:00:01.048) 0:02:24.353 ******* 2026-02-13 04:00:57.950793 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:00:57.950805 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:00:57.950816 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:00:57.950827 | orchestrator | 2026-02-13 04:00:57.950838 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-02-13 04:00:57.950849 | orchestrator | Friday 13 February 2026 04:00:27 +0000 (0:00:12.112) 0:02:36.466 ******* 2026-02-13 04:00:57.950860 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:00:57.950870 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:00:57.950881 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:00:57.950942 | orchestrator | 2026-02-13 04:00:57.950956 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-02-13 04:00:57.950968 | orchestrator | Friday 13 February 2026 04:00:28 +0000 (0:00:01.084) 0:02:37.550 ******* 2026-02-13 04:00:57.951006 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:00:57.951020 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:00:57.951033 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:00:57.951046 | orchestrator | 2026-02-13 04:00:57.951059 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-02-13 04:00:57.951072 | orchestrator | 2026-02-13 04:00:57.951085 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-13 04:00:57.951098 | orchestrator | Friday 13 February 2026 04:00:28 +0000 (0:00:00.357) 0:02:37.908 ******* 2026-02-13 04:00:57.951110 | 
orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:00:57.951124 | orchestrator | 2026-02-13 04:00:57.951137 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2026-02-13 04:00:57.951150 | orchestrator | Friday 13 February 2026 04:00:29 +0000 (0:00:00.812) 0:02:38.720 ******* 2026-02-13 04:00:57.951162 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-02-13 04:00:57.951176 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-02-13 04:00:57.951189 | orchestrator | 2026-02-13 04:00:57.951202 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-02-13 04:00:57.951213 | orchestrator | Friday 13 February 2026 04:00:32 +0000 (0:00:03.323) 0:02:42.043 ******* 2026-02-13 04:00:57.951224 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-02-13 04:00:57.951285 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-02-13 04:00:57.951298 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-02-13 04:00:57.951309 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-02-13 04:00:57.951321 | orchestrator | 2026-02-13 04:00:57.951332 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-02-13 04:00:57.951343 | orchestrator | Friday 13 February 2026 04:00:39 +0000 (0:00:06.287) 0:02:48.331 ******* 2026-02-13 04:00:57.951354 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-13 04:00:57.951365 | orchestrator | 2026-02-13 04:00:57.951375 | orchestrator | TASK [service-ks-register : nova | Creating 
users] ***************************** 2026-02-13 04:00:57.951386 | orchestrator | Friday 13 February 2026 04:00:42 +0000 (0:00:03.124) 0:02:51.456 ******* 2026-02-13 04:00:57.951397 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-13 04:00:57.951407 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2026-02-13 04:00:57.951418 | orchestrator | 2026-02-13 04:00:57.951429 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-02-13 04:00:57.951440 | orchestrator | Friday 13 February 2026 04:00:46 +0000 (0:00:03.767) 0:02:55.224 ******* 2026-02-13 04:00:57.951451 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-13 04:00:57.951462 | orchestrator | 2026-02-13 04:00:57.951472 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-02-13 04:00:57.951500 | orchestrator | Friday 13 February 2026 04:00:49 +0000 (0:00:03.148) 0:02:58.372 ******* 2026-02-13 04:00:57.951511 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-02-13 04:00:57.951533 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-02-13 04:00:57.951544 | orchestrator | 2026-02-13 04:00:57.951555 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-13 04:00:57.951589 | orchestrator | Friday 13 February 2026 04:00:56 +0000 (0:00:07.282) 0:03:05.654 ******* 2026-02-13 04:00:57.951606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:00:57.951636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:00:57.951650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:00:57.951675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
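The service definitions dumped in the items above each carry a kolla-style `healthcheck` dict (`interval`, `retries`, `start_period`, `test`, `timeout`). As an illustrative sketch only (not kolla-ansible's actual templating), such a dict maps onto Docker's standard health-check options roughly like this:

```python
def healthcheck_flags(hc: dict) -> list[str]:
    """Render a kolla-style healthcheck dict as `docker run` health options.

    Assumes interval/start_period/timeout are plain second counts, as in the
    dicts dumped in this log.
    """
    # A leading CMD-SHELL marker means the rest of the list is one shell command.
    parts = hc["test"][1:] if hc["test"][0] == "CMD-SHELL" else hc["test"]
    cmd = " ".join(parts).strip()
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# The nova-api healthcheck from the testbed-node-0 item above.
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774 "],
      "timeout": "30"}
for flag in healthcheck_flags(hc):
    print(flag)
```

The nova-scheduler entries use `healthcheck_port nova-scheduler 5672` instead, i.e. they check the RabbitMQ connection rather than an HTTP endpoint.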
2026-02-13 04:01:02.454366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:01:02.454486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:01:02.454499 | orchestrator | 2026-02-13 04:01:02.454510 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-13 04:01:02.454519 | orchestrator | Friday 13 February 2026 04:00:57 +0000 (0:00:01.393) 0:03:07.047 ******* 2026-02-13 04:01:02.454526 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:01:02.454535 | orchestrator | 2026-02-13 04:01:02.454543 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-13 04:01:02.454550 | orchestrator | Friday 13 February 2026 04:00:58 +0000 (0:00:00.141) 0:03:07.189 ******* 2026-02-13 04:01:02.454557 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:01:02.454565 | 
orchestrator | skipping: [testbed-node-1] 2026-02-13 04:01:02.454572 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:01:02.454579 | orchestrator | 2026-02-13 04:01:02.454586 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-13 04:01:02.454593 | orchestrator | Friday 13 February 2026 04:00:58 +0000 (0:00:00.305) 0:03:07.494 ******* 2026-02-13 04:01:02.454601 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:01:02.454608 | orchestrator | 2026-02-13 04:01:02.454615 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-13 04:01:02.454622 | orchestrator | Friday 13 February 2026 04:00:59 +0000 (0:00:00.689) 0:03:08.184 ******* 2026-02-13 04:01:02.454629 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:01:02.454637 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:01:02.454644 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:01:02.454651 | orchestrator | 2026-02-13 04:01:02.454658 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-13 04:01:02.454666 | orchestrator | Friday 13 February 2026 04:00:59 +0000 (0:00:00.510) 0:03:08.694 ******* 2026-02-13 04:01:02.454673 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:01:02.454682 | orchestrator | 2026-02-13 04:01:02.454689 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-13 04:01:02.454697 | orchestrator | Friday 13 February 2026 04:01:00 +0000 (0:00:00.558) 0:03:09.253 ******* 2026-02-13 04:01:02.454720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:02.454764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:02.454774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:02.454782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:01:02.454790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:01:02.454807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:01:02.454816 | orchestrator | 2026-02-13 04:01:02.454828 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-13 04:01:04.099385 | orchestrator | Friday 13 February 2026 04:01:02 +0000 (0:00:02.303) 0:03:11.556 ******* 2026-02-13 04:01:04.099506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 04:01:04.099529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:01:04.099543 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:01:04.099558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 04:01:04.099596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:01:04.099622 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:01:04.099654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 04:01:04.099668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:01:04.099680 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:01:04.099691 | orchestrator | 2026-02-13 04:01:04.099704 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-13 04:01:04.099715 | orchestrator | Friday 13 February 2026 04:01:03 +0000 (0:00:00.829) 0:03:12.386 
******* 2026-02-13 04:01:04.099727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 04:01:04.099748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:01:04.099760 | orchestrator | skipping: [testbed-node-0] 
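The backend TLS certificate and key copy tasks above skip every node because each haproxy entry in the dumped service dicts has `tls_backend: 'no'` (backend TLS is disabled on this testbed). A minimal sketch of that selection, with an assumed per-service condition for illustration rather than kolla-ansible's actual `kolla_enable_tls_backend` logic:

```python
def services_needing_backend_tls(services: dict) -> list[str]:
    """Return service keys whose haproxy entries request backend TLS."""
    selected = []
    for name, svc in services.items():
        haproxy = svc.get("haproxy", {})
        if any(entry.get("tls_backend") == "yes" for entry in haproxy.values()):
            selected.append(name)
    return selected

# Trimmed from the nova-api / nova-scheduler items dumped above.
services = {
    "nova-api": {"haproxy": {"nova_api": {"tls_backend": "no"},
                             "nova_metadata": {"tls_backend": "no"}}},
    "nova-scheduler": {},  # no haproxy section at all
}
print(services_needing_backend_tls(services))  # []
```

With every entry at `'no'`, the result is empty, which matches the all-skipping output of both TLS copy tasks in this run.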
2026-02-13 04:01:04.099787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 04:01:06.536630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:01:06.536756 | orchestrator | skipping: [testbed-node-1] 2026-02-13 
04:01:06.536775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 04:01:06.536814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:01:06.536826 | orchestrator | skipping: [testbed-node-2] 2026-02-13 
04:01:06.536839 | orchestrator | 2026-02-13 04:01:06.536851 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-13 04:01:06.536864 | orchestrator | Friday 13 February 2026 04:01:04 +0000 (0:00:00.814) 0:03:13.200 ******* 2026-02-13 04:01:06.536891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:06.536949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:06.536965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:06.536986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:01:06.537003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:01:06.537022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-13 04:01:12.495247 | orchestrator | 2026-02-13 04:01:12.495349 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-13 04:01:12.495363 | orchestrator | Friday 13 February 2026 04:01:06 +0000 (0:00:02.435) 0:03:15.635 ******* 2026-02-13 04:01:12.495377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:12.495411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:12.495437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:12.495464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:01:12.495476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:01:12.495491 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:01:12.495526 | orchestrator | 2026-02-13 04:01:12.495535 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-13 04:01:12.495544 | orchestrator | Friday 13 February 2026 04:01:11 +0000 (0:00:05.413) 0:03:21.049 ******* 2026-02-13 04:01:12.495558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 04:01:12.495569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:01:12.495579 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:01:12.495598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 04:01:16.814596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:01:16.814693 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:01:16.814710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-13 04:01:16.814738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:01:16.814750 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:01:16.814760 | orchestrator |
2026-02-13 04:01:16.814772 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-02-13 04:01:16.814783 | orchestrator | Friday 13 February 2026 04:01:12 +0000 (0:00:00.550) 0:03:21.599 *******
2026-02-13 04:01:16.814793 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:01:16.814803 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:01:16.814812 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:01:16.814822 | orchestrator |
2026-02-13 04:01:16.814832 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-02-13 04:01:16.814842 | orchestrator | Friday 13 February 2026 04:01:14 +0000 (0:00:01.531) 0:03:23.131 *******
2026-02-13 04:01:16.814851 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:01:16.814861 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:01:16.814870 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:01:16.814880 | orchestrator |
2026-02-13 04:01:16.814889 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-02-13 04:01:16.814899 | orchestrator | Friday 13 February 2026 04:01:14 +0000 (0:00:00.322) 0:03:23.453 *******
2026-02-13 04:01:16.814995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:16.815029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:16.815047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-13 04:01:16.815058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:01:16.815076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:01:16.815093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:02:04.002454 | orchestrator |
2026-02-13 04:02:04.002575 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-13 04:02:04.002593 | orchestrator | Friday 13 February 2026 04:01:16 +0000 (0:00:02.023) 0:03:25.477 *******
2026-02-13 04:02:04.002606 | orchestrator |
2026-02-13 04:02:04.002618 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-13 04:02:04.002629 | orchestrator | Friday 13 February 2026 04:01:16 +0000 (0:00:00.141) 0:03:25.619 *******
2026-02-13 04:02:04.002640 | orchestrator |
2026-02-13 04:02:04.002651 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2026-02-13 04:02:04.002662 | orchestrator | Friday 13 February 2026 04:01:16 +0000 (0:00:00.151) 0:03:25.770 *******
2026-02-13 04:02:04.002673 | orchestrator |
2026-02-13 04:02:04.002683 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2026-02-13 04:02:04.002694 | orchestrator | Friday 13 February 2026 04:01:16 +0000 (0:00:00.141) 0:03:25.912 *******
2026-02-13 04:02:04.002706 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:02:04.002717 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:02:04.002728 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:02:04.002739 | orchestrator |
2026-02-13 04:02:04.002750 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2026-02-13 04:02:04.002761 | orchestrator | Friday 13 February 2026 04:01:41 +0000 (0:00:25.202) 0:03:51.114 *******
2026-02-13 04:02:04.002772 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:02:04.002783 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:02:04.002794 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:02:04.002804 | orchestrator |
2026-02-13 04:02:04.002815 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2026-02-13 04:02:04.002826 | orchestrator |
2026-02-13 04:02:04.002837 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-13 04:02:04.002848 | orchestrator | Friday 13 February 2026 04:01:52 +0000 (0:00:10.450) 0:04:01.565 *******
2026-02-13 04:02:04.002860 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 04:02:04.002872 | orchestrator |
2026-02-13 04:02:04.002883 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-13 04:02:04.002909 | orchestrator | Friday 13 February 2026 04:01:53 +0000 (0:00:01.222) 0:04:02.788 *******
2026-02-13 04:02:04.002921 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:02:04.002962 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:02:04.002973 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:02:04.003009 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:02:04.003024 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:02:04.003043 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:02:04.003060 | orchestrator |
2026-02-13 04:02:04.003079 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2026-02-13 04:02:04.003098 | orchestrator | Friday 13 February 2026 04:01:54 +0000 (0:00:00.761) 0:04:03.550 *******
2026-02-13 04:02:04.003117 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:02:04.003136 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:02:04.003154 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:02:04.003173 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 04:02:04.003195 | orchestrator |
2026-02-13 04:02:04.003214 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-13 04:02:04.003233 | orchestrator | Friday 13 February 2026 04:01:55 +0000 (0:00:00.857) 0:04:04.407 *******
2026-02-13 04:02:04.003253 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2026-02-13 04:02:04.003274 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2026-02-13 04:02:04.003294 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2026-02-13 04:02:04.003307 | orchestrator |
2026-02-13 04:02:04.003320 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-13 04:02:04.003333 | orchestrator | Friday 13 February 2026 04:01:56 +0000 (0:00:00.879) 0:04:05.287 *******
2026-02-13 04:02:04.003347 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2026-02-13 04:02:04.003361 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2026-02-13 04:02:04.003375 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2026-02-13 04:02:04.003386 | orchestrator |
2026-02-13 04:02:04.003397 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-13 04:02:04.003407 | orchestrator | Friday 13 February 2026 04:01:57 +0000 (0:00:01.303) 0:04:06.590 *******
2026-02-13 04:02:04.003418 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2026-02-13 04:02:04.003429 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:02:04.003439 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2026-02-13 04:02:04.003450 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:02:04.003461 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2026-02-13 04:02:04.003471 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:02:04.003482 | orchestrator |
2026-02-13 04:02:04.003493 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2026-02-13 04:02:04.003504 | orchestrator | Friday 13 February 2026 04:01:58 +0000 (0:00:00.558) 0:04:07.149 *******
2026-02-13 04:02:04.003515 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-13 04:02:04.003525 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-13 04:02:04.003536 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-13 04:02:04.003547 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-13 04:02:04.003558 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:02:04.003569 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-13 04:02:04.003580 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-13 04:02:04.003591 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:02:04.003628 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-13 04:02:04.003647 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-13 04:02:04.003664 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:02:04.003682 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-13 04:02:04.003700 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-13 04:02:04.003735 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-13 04:02:04.003753 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-13 04:02:04.003764 | orchestrator |
2026-02-13 04:02:04.003775 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2026-02-13 04:02:04.003786 | orchestrator | Friday 13 February 2026 04:01:59 +0000 (0:00:01.180) 0:04:08.368 *******
2026-02-13 04:02:04.003797 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:02:04.003808 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:02:04.003819 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:02:04.003830 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:02:04.003841 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:02:04.003851 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:02:04.003862 | orchestrator |
2026-02-13 04:02:04.003873 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] ***************************************
2026-02-13 04:02:04.003883 | orchestrator |
Friday 13 February 2026 04:02:00 +0000 (0:00:01.180) 0:04:09.549 ******* 2026-02-13 04:02:04.003894 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:02:04.003905 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:02:04.003916 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:02:04.003984 | orchestrator | changed: [testbed-node-3] 2026-02-13 04:02:04.003997 | orchestrator | changed: [testbed-node-5] 2026-02-13 04:02:04.004008 | orchestrator | changed: [testbed-node-4] 2026-02-13 04:02:04.004019 | orchestrator | 2026-02-13 04:02:04.004030 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-13 04:02:04.004041 | orchestrator | Friday 13 February 2026 04:02:02 +0000 (0:00:01.718) 0:04:11.268 ******* 2026-02-13 04:02:04.004127 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-13 04:02:04.004159 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-13 04:02:04.004196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751259 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751388 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751499 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751579 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751620 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:05.751675 | orchestrator | 2026-02-13 04:02:05.751689 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-13 04:02:05.751701 | orchestrator | Friday 13 
February 2026 04:02:04 +0000 (0:00:02.301) 0:04:13.569 ******* 2026-02-13 04:02:05.751713 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:02:05.751725 | orchestrator | 2026-02-13 04:02:05.751737 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-13 04:02:05.751754 | orchestrator | Friday 13 February 2026 04:02:05 +0000 (0:00:01.283) 0:04:14.853 ******* 2026-02-13 04:02:09.019977 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-13 04:02:09.020141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-13 04:02:09.020205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-13 04:02:09.020229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 
04:02:09.020277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:02:09.020323 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-13 04:02:09.020345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:02:09.020372 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-13 04:02:09.020392 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-13 04:02:09.020412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:09.020448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:09.020481 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:11.005129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:11.005233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:11.005267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:11.005282 | orchestrator | 2026-02-13 04:02:11.005296 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-13 04:02:11.005308 | orchestrator | Friday 13 February 2026 04:02:09 +0000 (0:00:03.707) 0:04:18.560 ******* 2026-02-13 04:02:11.005321 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-13 04:02:11.005361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-13 04:02:11.005393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-13 04:02:11.005405 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:02:11.005424 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-13 04:02:11.005436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-13 04:02:11.005447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 
'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-13 04:02:11.005467 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:02:11.005479 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-13 04:02:11.005499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-13 04:02:12.788906 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-13 04:02:12.789064 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:02:12.789098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-13 04:02:12.789111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-13 04:02:12.789140 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:02:12.789151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-13 04:02:12.789161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-13 04:02:12.789171 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:02:12.789181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-13 04:02:12.789208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-13 04:02:12.789218 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:02:12.789228 | orchestrator |
2026-02-13 04:02:12.789239 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-02-13 04:02:12.789250 | orchestrator | Friday 13 February 2026 04:02:11 +0000 (0:00:01.810) 0:04:20.371 *******
2026-02-13 04:02:12.789265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-13 04:02:12.789283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-13 04:02:12.789295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-13 04:02:12.789305 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:02:12.789315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-13 04:02:12.789333 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-13 04:02:16.891526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-13 04:02:16.891636 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:02:16.891654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-13 04:02:16.891685 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-13 04:02:16.891698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-13 04:02:16.891710 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:02:16.891722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-13 04:02:16.891751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-13 04:02:16.891763 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:02:16.891780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-13 04:02:16.891801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-13 04:02:16.891812 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:02:16.891823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image':
'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-13 04:02:16.891834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-13 04:02:16.891845 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:02:16.891857 | orchestrator |
2026-02-13 04:02:16.891875 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-13 04:02:16.891895 | orchestrator | Friday 13 February 2026 04:02:13 +0000 (0:00:02.206) 0:04:22.577 *******
2026-02-13 04:02:16.891914 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:02:16.891965 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:02:16.891987 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:02:16.891999 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 04:02:16.892010 | orchestrator |
2026-02-13 04:02:16.892022 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-02-13 04:02:16.892032 | orchestrator | Friday 13 February 2026 04:02:14 +0000 (0:00:00.885) 0:04:23.463 *******
2026-02-13 04:02:16.892045 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-13 04:02:16.892057 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-13 04:02:16.892070 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-13 04:02:16.892083 | orchestrator |
2026-02-13 04:02:16.892095 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-02-13 04:02:16.892108 | orchestrator | Friday 13 February 2026 04:02:15 +0000 (0:00:01.100) 0:04:24.563 *******
2026-02-13 04:02:16.892121 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-13 04:02:16.892134 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-13 04:02:16.892147 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-13 04:02:16.892159 | orchestrator |
2026-02-13 04:02:16.892172 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-02-13 04:02:16.892184 | orchestrator | Friday 13 February 2026 04:02:16 +0000 (0:00:00.919) 0:04:25.482 *******
2026-02-13 04:02:16.892220 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:02:16.892234 | orchestrator | ok: [testbed-node-4]
2026-02-13 04:02:16.892246 | orchestrator | ok: [testbed-node-5]
2026-02-13 04:02:16.892259 | orchestrator |
2026-02-13 04:02:16.892281 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-02-13 04:02:38.105371 | orchestrator | Friday 13 February 2026 04:02:16 +0000 (0:00:00.511) 0:04:25.994 *******
2026-02-13 04:02:38.105532 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:02:38.105548 | orchestrator | ok: [testbed-node-4]
2026-02-13 04:02:38.105555 | orchestrator | ok: [testbed-node-5]
2026-02-13 04:02:38.105563 | orchestrator |
2026-02-13 04:02:38.105572 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-02-13 04:02:38.105580 | orchestrator | Friday 13 February 2026 04:02:17 +0000 (0:00:00.505) 0:04:26.500 *******
2026-02-13 04:02:38.105587 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-13 04:02:38.105595 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-13 04:02:38.105602 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-13 04:02:38.105609 | orchestrator |
2026-02-13 04:02:38.105616 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-02-13 04:02:38.105623 | orchestrator | Friday 13 February 2026 04:02:18 +0000 (0:00:01.366) 0:04:27.866 *******
2026-02-13 04:02:38.105649 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-13 04:02:38.105656 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-13 04:02:38.105663 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-13 04:02:38.105670 | orchestrator |
2026-02-13 04:02:38.105677 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-02-13 04:02:38.105684 | orchestrator | Friday 13 February 2026 04:02:19 +0000 (0:00:01.173) 0:04:29.040 *******
2026-02-13 04:02:38.105690 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-13 04:02:38.105697 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-13 04:02:38.105704 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-13 04:02:38.105711 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-02-13 04:02:38.105718 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-02-13 04:02:38.105724 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-02-13 04:02:38.105731 | orchestrator |
2026-02-13 04:02:38.105738 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-02-13 04:02:38.105745 | orchestrator | Friday 13 February 2026 04:02:23 +0000 (0:00:03.783) 0:04:32.824 *******
2026-02-13 04:02:38.105752 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:02:38.105760 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:02:38.105767 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:02:38.105774 | orchestrator |
2026-02-13 04:02:38.105781 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-02-13 04:02:38.105787 | orchestrator | Friday 13 February 2026 04:02:24 +0000 (0:00:00.325) 0:04:33.149 *******
2026-02-13 04:02:38.105794 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:02:38.105801 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:02:38.105808 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:02:38.105815 | orchestrator |
2026-02-13 04:02:38.105822 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-02-13 04:02:38.105829 | orchestrator | Friday 13 February 2026 04:02:24 +0000 (0:00:00.535) 0:04:33.684 *******
2026-02-13 04:02:38.105836 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:02:38.105843 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:02:38.105849 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:02:38.105855 | orchestrator |
2026-02-13 04:02:38.105862 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-02-13 04:02:38.105868 | orchestrator | Friday 13 February 2026 04:02:25 +0000 (0:00:01.224) 0:04:34.909 *******
2026-02-13 04:02:38.105875 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-13 04:02:38.105906 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-13 04:02:38.105913 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-13 04:02:38.105921 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-13 04:02:38.105929 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-13 04:02:38.105962 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-13 04:02:38.105972 | orchestrator |
2026-02-13 04:02:38.105980 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-02-13 04:02:38.105987 | orchestrator | Friday 13 February 2026 04:02:29 +0000 (0:00:03.234) 0:04:38.144 *******
2026-02-13 04:02:38.105995 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-13 04:02:38.106002 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-13 04:02:38.106009 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-13 04:02:38.106071 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-13 04:02:38.106079 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:02:38.106087 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-13 04:02:38.106094 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:02:38.106100 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-13 04:02:38.106106 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:02:38.106113 | orchestrator |
2026-02-13 04:02:38.106119 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-02-13 04:02:38.106125 | orchestrator | Friday 13 February 2026 04:02:32 +0000 (0:00:03.332) 0:04:41.477 *******
2026-02-13 04:02:38.106132 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:02:38.106138 | orchestrator |
2026-02-13 04:02:38.106173 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-02-13 04:02:38.106180 | orchestrator | Friday 13 February 2026 04:02:32 +0000 (0:00:00.131) 0:04:41.608 *******
2026-02-13 04:02:38.106186 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:02:38.106193 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:02:38.106199 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:02:38.106205 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:02:38.106211 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:02:38.106218 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:02:38.106225 | orchestrator |
2026-02-13 04:02:38.106231 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-02-13 04:02:38.106238 | orchestrator | Friday 13 February 2026 04:02:33 +0000 (0:00:00.815) 0:04:42.424 *******
2026-02-13 04:02:38.106244 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-13 04:02:38.106250 | orchestrator |
2026-02-13 04:02:38.106256 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-02-13 04:02:38.106263 | orchestrator | Friday 13 February 2026 04:02:34 +0000 (0:00:00.708) 0:04:43.132 *******
2026-02-13 04:02:38.106275 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:02:38.106282 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:02:38.106288 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:02:38.106294 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:02:38.106300 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:02:38.106306 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:02:38.106312 | orchestrator |
2026-02-13 04:02:38.106318 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-02-13 04:02:38.106324 | orchestrator | Friday 13 February 2026 04:02:34 +0000 (0:00:00.840) 0:04:43.972 *******
2026-02-13 04:02:38.106343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-13 04:02:38.106355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-13 04:02:38.106362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-13 04:02:38.106375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-13 04:02:39.029002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-13 04:02:39.029141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-13 04:02:39.029158 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-13 04:02:39.029172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-13 04:02:39.029184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-13 04:02:39.029195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-13 04:02:39.029226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-13 04:02:39.029245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-13 04:02:39.029267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-13 04:02:39.029280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-13 04:02:39.029291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-13 04:02:39.029303 | orchestrator |
2026-02-13 04:02:39.029330 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2026-02-13 04:02:39.029343 | orchestrator | Friday 13 February 2026 04:02:38 +0000 (0:00:03.547) 0:04:47.520 *******
2026-02-13 04:02:39.029363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/',
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-13 04:02:44.580779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-13 04:02:44.580919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-13 04:02:44.580936 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-13 04:02:44.580995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-13 04:02:44.581007 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-13 04:02:44.581039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:44.581067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:02:44.581080 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:44.581091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:02:44.581103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:02:44.581115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:02:44.581134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:03:02.495660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:03:02.495795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:03:02.495812 | orchestrator | 2026-02-13 04:03:02.495827 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-13 04:03:02.496675 | orchestrator | Friday 13 February 2026 04:02:45 +0000 (0:00:06.622) 0:04:54.142 ******* 2026-02-13 04:03:02.496712 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:03:02.496730 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:03:02.496749 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:03:02.496769 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:03:02.496788 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:02.496807 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:03:02.496820 | orchestrator | 2026-02-13 04:03:02.496832 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-13 04:03:02.496843 | orchestrator | Friday 13 February 2026 04:02:46 +0000 (0:00:01.347) 0:04:55.490 ******* 2026-02-13 04:03:02.496854 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-13 04:03:02.496866 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-13 04:03:02.496877 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-13 04:03:02.496888 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-13 04:03:02.496899 | orchestrator | changed: [testbed-node-4] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-13 04:03:02.496909 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-13 04:03:02.496920 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-13 04:03:02.496932 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:02.496943 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-13 04:03:02.497070 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:03:02.497138 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-13 04:03:02.497151 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:03:02.497162 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-13 04:03:02.497174 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-13 04:03:02.497214 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-13 04:03:02.497225 | orchestrator | 2026-02-13 04:03:02.497237 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-13 04:03:02.497249 | orchestrator | Friday 13 February 2026 04:02:49 +0000 (0:00:03.599) 0:04:59.090 ******* 2026-02-13 04:03:02.497259 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:03:02.497270 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:03:02.497281 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:03:02.497292 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:02.497302 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:03:02.497313 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:03:02.497324 | orchestrator | 2026-02-13 04:03:02.497335 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-13 04:03:02.497346 | orchestrator | Friday 13 February 2026 04:02:50 +0000 (0:00:00.592) 0:04:59.682 ******* 2026-02-13 04:03:02.497357 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-13 04:03:02.497369 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-13 04:03:02.497380 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-13 04:03:02.497391 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-13 04:03:02.497425 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-13 04:03:02.497437 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-13 04:03:02.497458 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-13 04:03:02.497469 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-13 04:03:02.497480 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-13 04:03:02.497491 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-13 04:03:02.497502 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:03:02.497512 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-13 04:03:02.497523 | orchestrator | 
skipping: [testbed-node-2] 2026-02-13 04:03:02.497534 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-13 04:03:02.497545 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:02.497555 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-13 04:03:02.497566 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-13 04:03:02.497577 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-13 04:03:02.497587 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-13 04:03:02.497598 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-13 04:03:02.497609 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-13 04:03:02.497620 | orchestrator | 2026-02-13 04:03:02.497631 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-13 04:03:02.497642 | orchestrator | Friday 13 February 2026 04:02:55 +0000 (0:00:05.247) 0:05:04.930 ******* 2026-02-13 04:03:02.497661 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-13 04:03:02.497672 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-13 04:03:02.497682 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-13 04:03:02.497693 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-13 04:03:02.497704 
| orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-13 04:03:02.497715 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-13 04:03:02.497726 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-13 04:03:02.497736 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-13 04:03:02.497747 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-13 04:03:02.497758 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-13 04:03:02.497769 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-13 04:03:02.497780 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-13 04:03:02.497790 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-13 04:03:02.497801 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:02.497812 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-13 04:03:02.497823 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:03:02.497834 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-13 04:03:02.497845 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:03:02.497856 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-13 04:03:02.497866 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-13 04:03:02.497877 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-13 04:03:02.497888 | orchestrator | changed: [testbed-node-5] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-13 04:03:02.497899 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-13 04:03:02.497910 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-13 04:03:02.497921 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-13 04:03:02.497938 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-13 04:03:07.510130 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-13 04:03:07.510279 | orchestrator | 2026-02-13 04:03:07.510314 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-13 04:03:07.510326 | orchestrator | Friday 13 February 2026 04:03:02 +0000 (0:00:06.649) 0:05:11.580 ******* 2026-02-13 04:03:07.510338 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:03:07.510358 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:03:07.510369 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:03:07.510379 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:07.510388 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:03:07.510398 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:03:07.510408 | orchestrator | 2026-02-13 04:03:07.510419 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-13 04:03:07.510429 | orchestrator | Friday 13 February 2026 04:03:03 +0000 (0:00:00.845) 0:05:12.425 ******* 2026-02-13 04:03:07.510439 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:03:07.510470 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:03:07.510480 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:03:07.510490 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:07.510499 | orchestrator | 
skipping: [testbed-node-1] 2026-02-13 04:03:07.510511 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:03:07.510523 | orchestrator | 2026-02-13 04:03:07.510535 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-13 04:03:07.510546 | orchestrator | Friday 13 February 2026 04:03:03 +0000 (0:00:00.644) 0:05:13.069 ******* 2026-02-13 04:03:07.510558 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:03:07.510570 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:07.510581 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:03:07.510592 | orchestrator | changed: [testbed-node-3] 2026-02-13 04:03:07.510604 | orchestrator | changed: [testbed-node-4] 2026-02-13 04:03:07.510615 | orchestrator | changed: [testbed-node-5] 2026-02-13 04:03:07.510627 | orchestrator | 2026-02-13 04:03:07.510638 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-13 04:03:07.510650 | orchestrator | Friday 13 February 2026 04:03:06 +0000 (0:00:02.129) 0:05:15.199 ******* 2026-02-13 04:03:07.510665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-02-13 04:03:07.510680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-13 04:03:07.510692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-13 04:03:07.510703 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:03:07.510738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-13 04:03:07.510756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-13 04:03:07.510767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-13 04:03:07.510777 | orchestrator | skipping: 
[testbed-node-5] 2026-02-13 04:03:07.510788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-13 04:03:07.510798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-13 04:03:07.510816 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-13 04:03:10.740860 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:03:10.741025 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-13 04:03:10.741044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:03:10.741053 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:10.741062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-13 04:03:10.741071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:03:10.741079 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:03:10.741087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-13 04:03:10.741096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:03:10.741121 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:03:10.741130 | orchestrator | 2026-02-13 04:03:10.741140 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-13 04:03:10.741149 | orchestrator | Friday 13 February 2026 04:03:07 +0000 (0:00:01.496) 0:05:16.695 ******* 2026-02-13 04:03:10.741158 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-13 04:03:10.741181 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-13 04:03:10.741196 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:03:10.741204 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-13 04:03:10.741212 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-13 04:03:10.741220 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:03:10.741228 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-13 04:03:10.741236 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-13 04:03:10.741244 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:03:10.741252 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-13 04:03:10.741271 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-13 04:03:10.741279 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:10.741287 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-02-13 04:03:10.741295 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-13 04:03:10.741303 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:03:10.741311 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-13 04:03:10.741318 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-13 04:03:10.741326 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:03:10.741334 | orchestrator | 2026-02-13 04:03:10.741343 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-13 04:03:10.741351 | orchestrator | Friday 13 February 2026 04:03:08 +0000 (0:00:00.880) 0:05:17.575 ******* 2026-02-13 04:03:10.741361 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-13 04:03:10.741371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-13 04:03:10.741385 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-13 04:03:10.741408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925733 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925744 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-13 04:03:12.925775 | orchestrator | 2026-02-13 04:03:12.925789 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-13 04:03:12.925801 | orchestrator | Friday 13 February 2026 04:03:11 +0000 (0:00:02.610) 
0:05:20.186 ******* 2026-02-13 04:03:12.925813 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:03:12.925825 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:03:12.925836 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:03:12.925846 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:03:12.925857 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:03:12.925868 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:03:12.925879 | orchestrator | 2026-02-13 04:03:12.925890 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-13 04:03:12.925901 | orchestrator | Friday 13 February 2026 04:03:11 +0000 (0:00:00.826) 0:05:21.013 ******* 2026-02-13 04:03:12.925912 | orchestrator | 2026-02-13 04:03:12.925923 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-13 04:03:12.925933 | orchestrator | Friday 13 February 2026 04:03:12 +0000 (0:00:00.138) 0:05:21.152 ******* 2026-02-13 04:03:12.925944 | orchestrator | 2026-02-13 04:03:12.925989 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-13 04:03:12.926009 | orchestrator | Friday 13 February 2026 04:03:12 +0000 (0:00:00.147) 0:05:21.299 ******* 2026-02-13 04:03:12.926095 | orchestrator | 2026-02-13 04:03:12.926110 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-13 04:03:12.926131 | orchestrator | Friday 13 February 2026 04:03:12 +0000 (0:00:00.137) 0:05:21.437 ******* 2026-02-13 04:06:18.510242 | orchestrator | 2026-02-13 04:06:18.510385 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-13 04:06:18.510412 | orchestrator | Friday 13 February 2026 04:03:12 +0000 (0:00:00.134) 0:05:21.571 ******* 2026-02-13 04:06:18.510430 | orchestrator | 2026-02-13 04:06:18.510447 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-02-13 04:06:18.510465 | orchestrator | Friday 13 February 2026 04:03:12 +0000 (0:00:00.297) 0:05:21.869 ******* 2026-02-13 04:06:18.510475 | orchestrator | 2026-02-13 04:06:18.510485 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-13 04:06:18.510495 | orchestrator | Friday 13 February 2026 04:03:12 +0000 (0:00:00.140) 0:05:22.010 ******* 2026-02-13 04:06:18.510505 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:06:18.510516 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:06:18.510525 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:06:18.510535 | orchestrator | 2026-02-13 04:06:18.510545 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-13 04:06:18.510555 | orchestrator | Friday 13 February 2026 04:03:19 +0000 (0:00:06.734) 0:05:28.744 ******* 2026-02-13 04:06:18.510565 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:06:18.510574 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:06:18.510584 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:06:18.510594 | orchestrator | 2026-02-13 04:06:18.510603 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-13 04:06:18.510706 | orchestrator | Friday 13 February 2026 04:03:34 +0000 (0:00:14.923) 0:05:43.668 ******* 2026-02-13 04:06:18.510719 | orchestrator | changed: [testbed-node-3] 2026-02-13 04:06:18.510729 | orchestrator | changed: [testbed-node-5] 2026-02-13 04:06:18.510738 | orchestrator | changed: [testbed-node-4] 2026-02-13 04:06:18.510748 | orchestrator | 2026-02-13 04:06:18.510759 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-13 04:06:18.510771 | orchestrator | Friday 13 February 2026 04:03:57 +0000 (0:00:22.668) 0:06:06.336 ******* 2026-02-13 04:06:18.510782 | orchestrator | changed: 
[testbed-node-5] 2026-02-13 04:06:18.510794 | orchestrator | changed: [testbed-node-4] 2026-02-13 04:06:18.510804 | orchestrator | changed: [testbed-node-3] 2026-02-13 04:06:18.510815 | orchestrator | 2026-02-13 04:06:18.510826 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-13 04:06:18.510838 | orchestrator | Friday 13 February 2026 04:04:37 +0000 (0:00:40.775) 0:06:47.112 ******* 2026-02-13 04:06:18.510849 | orchestrator | changed: [testbed-node-3] 2026-02-13 04:06:18.510860 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2026-02-13 04:06:18.510871 | orchestrator | changed: [testbed-node-5] 2026-02-13 04:06:18.510881 | orchestrator | changed: [testbed-node-4] 2026-02-13 04:06:18.510890 | orchestrator | 2026-02-13 04:06:18.510900 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-13 04:06:18.510909 | orchestrator | Friday 13 February 2026 04:04:44 +0000 (0:00:06.205) 0:06:53.318 ******* 2026-02-13 04:06:18.510919 | orchestrator | changed: [testbed-node-3] 2026-02-13 04:06:18.510928 | orchestrator | changed: [testbed-node-4] 2026-02-13 04:06:18.510938 | orchestrator | changed: [testbed-node-5] 2026-02-13 04:06:18.510947 | orchestrator | 2026-02-13 04:06:18.510957 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-13 04:06:18.510967 | orchestrator | Friday 13 February 2026 04:04:44 +0000 (0:00:00.757) 0:06:54.075 ******* 2026-02-13 04:06:18.510976 | orchestrator | changed: [testbed-node-5] 2026-02-13 04:06:18.510986 | orchestrator | changed: [testbed-node-4] 2026-02-13 04:06:18.510996 | orchestrator | changed: [testbed-node-3] 2026-02-13 04:06:18.511006 | orchestrator | 2026-02-13 04:06:18.511015 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-13 04:06:18.511026 | 
orchestrator | Friday 13 February 2026 04:05:13 +0000 (0:00:28.635) 0:07:22.711 ******* 2026-02-13 04:06:18.511035 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:06:18.511045 | orchestrator | 2026-02-13 04:06:18.511054 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-13 04:06:18.511064 | orchestrator | Friday 13 February 2026 04:05:13 +0000 (0:00:00.137) 0:07:22.849 ******* 2026-02-13 04:06:18.511074 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:06:18.511083 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:06:18.511092 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:06:18.511102 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:06:18.511111 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:06:18.511121 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-02-13 04:06:18.511133 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-13 04:06:18.511143 | orchestrator | 2026-02-13 04:06:18.511153 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-13 04:06:18.511162 | orchestrator | Friday 13 February 2026 04:05:35 +0000 (0:00:21.504) 0:07:44.353 ******* 2026-02-13 04:06:18.511172 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:06:18.511181 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:06:18.511191 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:06:18.511200 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:06:18.511209 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:06:18.511219 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:06:18.511236 | orchestrator | 2026-02-13 04:06:18.511246 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-13 04:06:18.511255 | orchestrator | 
Friday 13 February 2026 04:05:43 +0000 (0:00:08.612) 0:07:52.966 ******* 2026-02-13 04:06:18.511265 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:06:18.511274 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:06:18.511284 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:06:18.511293 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:06:18.511304 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:06:18.511328 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2026-02-13 04:06:18.511338 | orchestrator | 2026-02-13 04:06:18.511348 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-13 04:06:18.511377 | orchestrator | Friday 13 February 2026 04:05:47 +0000 (0:00:03.606) 0:07:56.573 ******* 2026-02-13 04:06:18.511387 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-13 04:06:18.511397 | orchestrator | 2026-02-13 04:06:18.511406 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-13 04:06:18.511416 | orchestrator | Friday 13 February 2026 04:06:00 +0000 (0:00:12.601) 0:08:09.174 ******* 2026-02-13 04:06:18.511425 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-13 04:06:18.511435 | orchestrator | 2026-02-13 04:06:18.511444 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-13 04:06:18.511454 | orchestrator | Friday 13 February 2026 04:06:01 +0000 (0:00:01.520) 0:08:10.696 ******* 2026-02-13 04:06:18.511463 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:06:18.511473 | orchestrator | 2026-02-13 04:06:18.511482 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-13 04:06:18.511492 | orchestrator | Friday 13 February 2026 04:06:03 +0000 (0:00:01.630) 0:08:12.326 ******* 2026-02-13 04:06:18.511501 | 
orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-13 04:06:18.511511 | orchestrator | 2026-02-13 04:06:18.511520 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-02-13 04:06:18.511530 | orchestrator | Friday 13 February 2026 04:06:14 +0000 (0:00:11.124) 0:08:23.451 ******* 2026-02-13 04:06:18.511539 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:06:18.511549 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:06:18.511559 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:06:18.511568 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:06:18.511578 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:06:18.511587 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:06:18.511596 | orchestrator | 2026-02-13 04:06:18.511606 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-02-13 04:06:18.511616 | orchestrator | 2026-02-13 04:06:18.511642 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-02-13 04:06:18.511652 | orchestrator | Friday 13 February 2026 04:06:16 +0000 (0:00:01.789) 0:08:25.240 ******* 2026-02-13 04:06:18.511662 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:06:18.511671 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:06:18.511681 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:06:18.511690 | orchestrator | 2026-02-13 04:06:18.511700 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-02-13 04:06:18.511710 | orchestrator | 2026-02-13 04:06:18.511719 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-02-13 04:06:18.511729 | orchestrator | Friday 13 February 2026 04:06:17 +0000 (0:00:00.996) 0:08:26.237 ******* 2026-02-13 04:06:18.511738 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:06:18.511748 | orchestrator | skipping: 
[testbed-node-1]
2026-02-13 04:06:18.511757 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:18.511767 | orchestrator |
2026-02-13 04:06:18.511776 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-13 04:06:18.511786 | orchestrator |
2026-02-13 04:06:18.511796 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-13 04:06:18.511812 | orchestrator | Friday 13 February 2026 04:06:17 +0000 (0:00:00.770) 0:08:27.007 *******
2026-02-13 04:06:18.511822 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-13 04:06:18.511832 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-13 04:06:18.511841 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-13 04:06:18.511851 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-13 04:06:18.511861 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-13 04:06:18.511871 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-13 04:06:18.511880 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:06:18.511890 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-13 04:06:18.511900 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-13 04:06:18.511910 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-13 04:06:18.511920 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-13 04:06:18.511929 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-13 04:06:18.511939 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-13 04:06:18.511948 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:06:18.511958 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-13 04:06:18.511968 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-13 04:06:18.511977 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-13 04:06:18.511987 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-13 04:06:18.511996 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-13 04:06:18.512006 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-13 04:06:18.512015 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:06:18.512025 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-13 04:06:18.512035 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-13 04:06:18.512044 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-13 04:06:18.512054 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-13 04:06:18.512063 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-13 04:06:18.512073 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-13 04:06:18.512082 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:18.512092 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-13 04:06:18.512107 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-13 04:06:18.512116 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-13 04:06:18.512126 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-13 04:06:18.512142 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-13 04:06:21.551195 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-13 04:06:21.551299 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:21.551315 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-13 04:06:21.551329 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-13 04:06:21.551340 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-13 04:06:21.551351 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-13 04:06:21.551362 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-13 04:06:21.551373 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-13 04:06:21.551384 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:21.551395 | orchestrator |
2026-02-13 04:06:21.551408 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-13 04:06:21.551446 | orchestrator |
2026-02-13 04:06:21.551458 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-13 04:06:21.551469 | orchestrator | Friday 13 February 2026 04:06:19 +0000 (0:00:01.391) 0:08:28.399 *******
2026-02-13 04:06:21.551480 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-13 04:06:21.551490 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-13 04:06:21.551501 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:21.551512 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-13 04:06:21.551523 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-13 04:06:21.551533 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:21.551544 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-13 04:06:21.551554 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-13 04:06:21.551565 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:21.551575 | orchestrator |
2026-02-13 04:06:21.551586 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-13 04:06:21.551597 | orchestrator |
2026-02-13 04:06:21.551659 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-13 04:06:21.551675 | orchestrator | Friday 13 February 2026 04:06:19 +0000 (0:00:00.557) 0:08:28.956 *******
2026-02-13 04:06:21.551686 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:21.551697 | orchestrator |
2026-02-13 04:06:21.551708 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-13 04:06:21.551719 | orchestrator |
2026-02-13 04:06:21.551730 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-13 04:06:21.551742 | orchestrator | Friday 13 February 2026 04:06:20 +0000 (0:00:00.853) 0:08:29.810 *******
2026-02-13 04:06:21.551756 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:21.551769 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:21.551783 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:21.551795 | orchestrator |
2026-02-13 04:06:21.551808 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 04:06:21.551822 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 04:06:21.551838 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-13 04:06:21.551852 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-13 04:06:21.551864 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-13 04:06:21.551877 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-02-13 04:06:21.551890 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-13 04:06:21.551902 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2026-02-13 04:06:21.551915 | orchestrator |
2026-02-13 04:06:21.551928 | orchestrator |
2026-02-13 04:06:21.551941 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 04:06:21.551954 | orchestrator | Friday 13 February 2026 04:06:21 +0000 (0:00:00.467) 0:08:30.278 *******
2026-02-13 04:06:21.551967 | orchestrator | ===============================================================================
2026-02-13 04:06:21.551980 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 40.78s
2026-02-13 04:06:21.551993 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.75s
2026-02-13 04:06:21.552014 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 28.64s
2026-02-13 04:06:21.552026 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 25.20s
2026-02-13 04:06:21.552039 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.67s
2026-02-13 04:06:21.552052 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.68s
2026-02-13 04:06:21.552080 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.50s
2026-02-13 04:06:21.552094 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.18s
2026-02-13 04:06:21.552107 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 14.92s
2026-02-13 04:06:21.552136 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.16s
2026-02-13 04:06:21.552148 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.60s
2026-02-13 04:06:21.552159 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.11s
2026-02-13 04:06:21.552169 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.95s
2026-02-13 04:06:21.552180 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.12s
2026-02-13 04:06:21.552191 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.45s
2026-02-13 04:06:21.552202 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.45s
2026-02-13 04:06:21.552213 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.61s
2026-02-13 04:06:21.552223 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 7.55s
2026-02-13 04:06:21.552234 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.28s
2026-02-13 04:06:21.552245 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 6.73s
2026-02-13 04:06:23.816038 | orchestrator | 2026-02-13 04:06:23 | INFO  | Task 88035933-ec34-425e-ab28-dda230deb787 (horizon) was prepared for execution.
2026-02-13 04:06:23.816144 | orchestrator | 2026-02-13 04:06:23 | INFO  | It takes a moment until task 88035933-ec34-425e-ab28-dda230deb787 (horizon) has been started and output is visible here.
2026-02-13 04:06:30.961309 | orchestrator |
2026-02-13 04:06:30.961413 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 04:06:30.961426 | orchestrator |
2026-02-13 04:06:30.961434 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 04:06:30.961442 | orchestrator | Friday 13 February 2026 04:06:27 +0000 (0:00:00.254) 0:00:00.254 *******
2026-02-13 04:06:30.961448 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:30.961456 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:30.961463 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:30.961469 | orchestrator |
2026-02-13 04:06:30.961475 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 04:06:30.961482 | orchestrator | Friday 13 February 2026 04:06:28 +0000 (0:00:00.321) 0:00:00.576 *******
2026-02-13 04:06:30.961489 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-13 04:06:30.961496 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-13 04:06:30.961503 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-13 04:06:30.961510 | orchestrator |
2026-02-13 04:06:30.961516 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-13 04:06:30.961522 | orchestrator |
2026-02-13 04:06:30.961529 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-13 04:06:30.961535 | orchestrator | Friday 13 February 2026 04:06:28 +0000 (0:00:00.444) 0:00:01.020 *******
2026-02-13 04:06:30.961542 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 04:06:30.961550 | orchestrator |
2026-02-13 04:06:30.961555 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-13 04:06:30.961629 | orchestrator | Friday 13 February 2026 04:06:29 +0000 (0:00:00.512) 0:00:01.533 ******* 2026-02-13 04:06:30.961656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 04:06:30.961685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 04:06:30.961706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 04:06:30.961714 | orchestrator | 2026-02-13 04:06:30.961720 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-02-13 04:06:30.961727 | orchestrator | Friday 13 February 2026 04:06:30 +0000 (0:00:01.157) 0:00:02.690 ******* 2026-02-13 04:06:30.961733 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:06:30.961739 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:06:30.961746 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:06:30.961753 | orchestrator | 2026-02-13 04:06:30.961759 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-13 04:06:30.961766 | orchestrator | Friday 13 February 2026 04:06:30 +0000 (0:00:00.480) 0:00:03.170 ******* 2026-02-13 04:06:30.961777 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-02-13 04:06:36.841952 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-02-13 04:06:36.842118 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-02-13 04:06:36.842138 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  
2026-02-13 04:06:36.842151 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-13 04:06:36.842164 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-13 04:06:36.842176 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-13 04:06:36.842215 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-13 04:06:36.842229 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-13 04:06:36.842241 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-13 04:06:36.842253 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-13 04:06:36.842263 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-13 04:06:36.842274 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-13 04:06:36.842285 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-13 04:06:36.842296 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-13 04:06:36.842307 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-13 04:06:36.842319 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-13 04:06:36.842331 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-13 04:06:36.842342 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-13 04:06:36.842353 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-13 04:06:36.842364 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-13 04:06:36.842375 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-13 04:06:36.842387 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-13 04:06:36.842398 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-13 04:06:36.842410 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-13 04:06:36.842425 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-13 04:06:36.842437 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-13 04:06:36.842463 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-13 04:06:36.842476 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-13 04:06:36.842488 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-13 04:06:36.842499 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-13 04:06:36.842511 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-13 04:06:36.842523 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-13 04:06:36.842537 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-13 04:06:36.842550 | orchestrator |
2026-02-13 04:06:36.842585 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-13 04:06:36.842600 | orchestrator | Friday 13 February 2026 04:06:31 +0000 (0:00:00.760) 0:00:03.931 *******
2026-02-13 04:06:36.842621 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:36.842636 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:36.842647 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:36.842660 | orchestrator |
2026-02-13 04:06:36.842673 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-13 04:06:36.842686 | orchestrator | Friday 13 February 2026 04:06:31 +0000 (0:00:00.314) 0:00:04.245 *******
2026-02-13 04:06:36.842697 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.842709 | orchestrator |
2026-02-13 04:06:36.842740 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-13 04:06:36.842754 | orchestrator | Friday 13 February 2026 04:06:32 +0000 (0:00:00.296) 0:00:04.542 *******
2026-02-13 04:06:36.842767 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.842780 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:36.842792 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:36.842803 | orchestrator |
2026-02-13 04:06:36.842815 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-13 04:06:36.842827 | orchestrator | Friday 13 February 2026 04:06:32 +0000 (0:00:00.300) 0:00:04.843 *******
2026-02-13 04:06:36.842839 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:36.842850 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:36.842861 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:36.842873 | orchestrator |
2026-02-13 04:06:36.842885 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-13 04:06:36.842898 | orchestrator | Friday 13 February 2026 04:06:32 +0000 (0:00:00.332) 0:00:05.176 *******
2026-02-13 04:06:36.842908 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.842919 | orchestrator |
2026-02-13 04:06:36.842930 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-13 04:06:36.842941 | orchestrator | Friday 13 February 2026 04:06:32 +0000 (0:00:00.115) 0:00:05.291 *******
2026-02-13 04:06:36.842951 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.842962 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:36.842972 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:36.842982 | orchestrator |
2026-02-13 04:06:36.842992 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-13 04:06:36.843003 | orchestrator | Friday 13 February 2026 04:06:33 +0000 (0:00:00.296) 0:00:05.588 *******
2026-02-13 04:06:36.843014 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:36.843025 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:36.843037 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:36.843048 | orchestrator |
2026-02-13 04:06:36.843059 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-13 04:06:36.843071 | orchestrator | Friday 13 February 2026 04:06:33 +0000 (0:00:00.498) 0:00:06.086 *******
2026-02-13 04:06:36.843082 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.843094 | orchestrator |
2026-02-13 04:06:36.843105 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-13 04:06:36.843116 | orchestrator | Friday 13 February 2026 04:06:33 +0000 (0:00:00.139) 0:00:06.225 *******
2026-02-13 04:06:36.843127 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.843139 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:36.843150 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:36.843162 | orchestrator |
2026-02-13 04:06:36.843173 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-13 04:06:36.843184 | orchestrator | Friday 13 February 2026 04:06:34 +0000 (0:00:00.308) 0:00:06.533 *******
2026-02-13 04:06:36.843196 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:36.843207 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:36.843218 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:36.843230 | orchestrator |
2026-02-13 04:06:36.843241 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-13 04:06:36.843252 | orchestrator | Friday 13 February 2026 04:06:34 +0000 (0:00:00.318) 0:00:06.852 *******
2026-02-13 04:06:36.843273 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.843284 | orchestrator |
2026-02-13 04:06:36.843295 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-13 04:06:36.843306 | orchestrator | Friday 13 February 2026 04:06:34 +0000 (0:00:00.134) 0:00:06.987 *******
2026-02-13 04:06:36.843317 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.843328 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:36.843340 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:36.843352 | orchestrator |
2026-02-13 04:06:36.843363 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-13 04:06:36.843382 | orchestrator | Friday 13 February 2026 04:06:35 +0000 (0:00:00.469) 0:00:07.456 *******
2026-02-13 04:06:36.843393 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:36.843404 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:36.843416 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:36.843427 | orchestrator |
2026-02-13 04:06:36.843439 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-13 04:06:36.843451 | orchestrator | Friday 13 February 2026 04:06:35 +0000 (0:00:00.332) 0:00:07.789 *******
2026-02-13 04:06:36.843463 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.843474 | orchestrator |
2026-02-13 04:06:36.843485 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-13 04:06:36.843496 | orchestrator | Friday 13 February 2026 04:06:35 +0000 (0:00:00.139) 0:00:07.928 *******
2026-02-13 04:06:36.843507 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.843518 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:36.843529 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:36.843541 | orchestrator |
2026-02-13 04:06:36.843553 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-13 04:06:36.843608 | orchestrator | Friday 13 February 2026 04:06:35 +0000 (0:00:00.297) 0:00:08.225 *******
2026-02-13 04:06:36.843622 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:36.843634 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:36.843646 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:36.843658 | orchestrator |
2026-02-13 04:06:36.843670 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-13 04:06:36.843681 | orchestrator | Friday 13 February 2026 04:06:36 +0000 (0:00:00.314) 0:00:08.540 *******
2026-02-13 04:06:36.843692 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.843703 | orchestrator |
2026-02-13 04:06:36.843715 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-13 04:06:36.843727 | orchestrator | Friday 13 February 2026 04:06:36 +0000 (0:00:00.303) 0:00:08.844 *******
2026-02-13 04:06:36.843739 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:36.843751 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:36.843763 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:36.843774 | orchestrator |
2026-02-13 04:06:36.843786 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-13 04:06:36.843808 | orchestrator | Friday 13 February 2026 04:06:36 +0000 (0:00:00.313) 0:00:09.157 *******
2026-02-13 04:06:50.678306 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:50.678444 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:50.678473 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:50.678493 | orchestrator |
2026-02-13 04:06:50.678515 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-13 04:06:50.678591 | orchestrator | Friday 13 February 2026 04:06:37 +0000 (0:00:00.327) 0:00:09.485 *******
2026-02-13 04:06:50.678611 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:50.678631 | orchestrator |
2026-02-13 04:06:50.678643 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-13 04:06:50.678654 | orchestrator | Friday 13 February 2026 04:06:37 +0000 (0:00:00.146) 0:00:09.631 *******
2026-02-13 04:06:50.678665 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:50.678677 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:50.678714 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:50.678725 | orchestrator |
2026-02-13 04:06:50.678738 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-13 04:06:50.678749 | orchestrator | Friday 13 February 2026 04:06:37 +0000 (0:00:00.302) 0:00:09.934 *******
2026-02-13 04:06:50.678760 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:50.678771 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:50.678781 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:50.678792 | orchestrator |
2026-02-13 04:06:50.678803 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-13 04:06:50.678814 | orchestrator | Friday 13 February 2026 04:06:38 +0000 (0:00:00.526) 0:00:10.460 *******
2026-02-13 04:06:50.678827 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:50.678846 | orchestrator |
2026-02-13 04:06:50.678865 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-13 04:06:50.678883 | orchestrator | Friday 13 February 2026 04:06:38 +0000 (0:00:00.126) 0:00:10.586 *******
2026-02-13 04:06:50.678901 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:50.678920 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:50.678939 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:50.678959 | orchestrator |
2026-02-13 04:06:50.678978 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-13 04:06:50.678996 | orchestrator | Friday 13 February 2026 04:06:38 +0000 (0:00:00.315) 0:00:10.902 *******
2026-02-13 04:06:50.679011 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:50.679022 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:50.679033 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:50.679044 | orchestrator |
2026-02-13 04:06:50.679148 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-13 04:06:50.679164 | orchestrator | Friday 13 February 2026 04:06:38 +0000 (0:00:00.308) 0:00:11.210 *******
2026-02-13 04:06:50.679175 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:50.679186 | orchestrator |
2026-02-13 04:06:50.679197 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-13 04:06:50.679208 | orchestrator | Friday 13 February 2026 04:06:39 +0000 (0:00:00.133) 0:00:11.344 *******
2026-02-13 04:06:50.679219 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:50.679230 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:50.679241 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:50.679251 | orchestrator |
2026-02-13 04:06:50.679262 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-13 04:06:50.679273 | orchestrator | Friday 13 February 2026 04:06:39 +0000 (0:00:00.488) 0:00:11.833 *******
2026-02-13 04:06:50.679284 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:06:50.679295 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:06:50.679306 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:06:50.679317 | orchestrator |
2026-02-13 04:06:50.679327 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-13 04:06:50.679338 | orchestrator | Friday 13 February 2026 04:06:39 +0000 (0:00:00.331) 0:00:12.165 *******
2026-02-13 04:06:50.679349 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:50.679360 | orchestrator |
2026-02-13 04:06:50.679371 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-13 04:06:50.679396 | orchestrator | Friday 13 February 2026 04:06:39 +0000 (0:00:00.135) 0:00:12.300 *******
2026-02-13 04:06:50.679407 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:06:50.679418 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:06:50.679429 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:06:50.679440 | orchestrator |
2026-02-13 04:06:50.679451 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-13 04:06:50.679462 | orchestrator | Friday 13 February 2026
04:06:40 +0000 (0:00:00.303) 0:00:12.603 ******* 2026-02-13 04:06:50.679473 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:06:50.679484 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:06:50.679495 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:06:50.679517 | orchestrator | 2026-02-13 04:06:50.679556 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-13 04:06:50.679568 | orchestrator | Friday 13 February 2026 04:06:42 +0000 (0:00:01.960) 0:00:14.563 ******* 2026-02-13 04:06:50.679578 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-13 04:06:50.679590 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-13 04:06:50.679601 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-13 04:06:50.679611 | orchestrator | 2026-02-13 04:06:50.679622 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-13 04:06:50.679633 | orchestrator | Friday 13 February 2026 04:06:44 +0000 (0:00:02.021) 0:00:16.584 ******* 2026-02-13 04:06:50.679644 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-13 04:06:50.679656 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-13 04:06:50.679666 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-13 04:06:50.679677 | orchestrator | 2026-02-13 04:06:50.679688 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-13 04:06:50.679721 | orchestrator | Friday 13 February 2026 04:06:46 +0000 (0:00:01.863) 0:00:18.448 ******* 2026-02-13 04:06:50.679733 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-13 04:06:50.679744 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-13 04:06:50.679755 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-13 04:06:50.679766 | orchestrator | 2026-02-13 04:06:50.679777 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-13 04:06:50.679787 | orchestrator | Friday 13 February 2026 04:06:47 +0000 (0:00:01.463) 0:00:19.912 ******* 2026-02-13 04:06:50.679798 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:06:50.679809 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:06:50.679820 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:06:50.679831 | orchestrator | 2026-02-13 04:06:50.679842 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-13 04:06:50.679853 | orchestrator | Friday 13 February 2026 04:06:47 +0000 (0:00:00.392) 0:00:20.305 ******* 2026-02-13 04:06:50.679863 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:06:50.679874 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:06:50.679885 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:06:50.679896 | orchestrator | 2026-02-13 04:06:50.679911 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-13 04:06:50.679930 | orchestrator | Friday 13 February 2026 04:06:48 +0000 (0:00:00.294) 0:00:20.599 ******* 2026-02-13 04:06:50.679948 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:06:50.679968 | orchestrator | 2026-02-13 04:06:50.679988 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-13 04:06:50.680006 | orchestrator | 
Friday 13 February 2026 04:06:48 +0000 (0:00:00.520) 0:00:21.120 ******* 2026-02-13 04:06:50.680041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 04:06:50.680079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 04:06:51.327147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 04:06:51.327277 | orchestrator | 2026-02-13 04:06:51.327295 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-13 04:06:51.327308 | orchestrator | Friday 13 February 2026 04:06:50 +0000 (0:00:01.868) 0:00:22.988 ******* 2026-02-13 04:06:51.327341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 04:06:51.327363 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:06:51.327383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 04:06:51.327395 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:06:51.327416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 04:06:53.864566 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:06:53.864685 | orchestrator | 2026-02-13 04:06:53.864706 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-02-13 04:06:53.864720 | orchestrator | Friday 13 February 2026 04:06:51 +0000 (0:00:00.652) 0:00:23.641 ******* 2026-02-13 04:06:53.864756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 04:06:53.864775 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:06:53.864802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 04:06:53.864835 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:06:53.864845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 04:06:53.864854 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:06:53.864862 | orchestrator | 2026-02-13 04:06:53.864908 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-13 04:06:53.864917 | orchestrator | Friday 13 February 2026 04:06:52 +0000 (0:00:00.898) 0:00:24.540 ******* 2026-02-13 04:06:53.864938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 04:07:39.344273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 04:07:39.344544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 04:07:39.344572 | orchestrator | 
2026-02-13 04:07:39.344595 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-13 04:07:39.344616 | orchestrator | Friday 13 February 2026 04:06:53 +0000 (0:00:01.639) 0:00:26.179 ******* 2026-02-13 04:07:39.344634 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:07:39.344654 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:07:39.344671 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:07:39.344687 | orchestrator | 2026-02-13 04:07:39.344705 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-13 04:07:39.344721 | orchestrator | Friday 13 February 2026 04:06:54 +0000 (0:00:00.297) 0:00:26.476 ******* 2026-02-13 04:07:39.344738 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:07:39.344756 | orchestrator | 2026-02-13 04:07:39.344776 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-13 04:07:39.344795 | orchestrator | Friday 13 February 2026 04:06:54 +0000 (0:00:00.554) 0:00:27.031 ******* 2026-02-13 04:07:39.344815 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:07:39.344834 | orchestrator | 2026-02-13 04:07:39.344854 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-13 04:07:39.344873 | orchestrator | Friday 13 February 2026 04:06:56 +0000 (0:00:02.186) 0:00:29.217 ******* 2026-02-13 04:07:39.344891 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:07:39.344911 | orchestrator | 2026-02-13 04:07:39.344930 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-13 04:07:39.344948 | orchestrator | Friday 13 February 2026 04:06:59 +0000 (0:00:02.683) 0:00:31.901 ******* 2026-02-13 04:07:39.344988 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:07:39.345008 | orchestrator 
| 2026-02-13 04:07:39.345026 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-13 04:07:39.345042 | orchestrator | Friday 13 February 2026 04:07:15 +0000 (0:00:16.014) 0:00:47.915 ******* 2026-02-13 04:07:39.345055 | orchestrator | 2026-02-13 04:07:39.345068 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-13 04:07:39.345080 | orchestrator | Friday 13 February 2026 04:07:15 +0000 (0:00:00.076) 0:00:47.992 ******* 2026-02-13 04:07:39.345093 | orchestrator | 2026-02-13 04:07:39.345106 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-13 04:07:39.345118 | orchestrator | Friday 13 February 2026 04:07:15 +0000 (0:00:00.066) 0:00:48.058 ******* 2026-02-13 04:07:39.345132 | orchestrator | 2026-02-13 04:07:39.345144 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-13 04:07:39.345155 | orchestrator | Friday 13 February 2026 04:07:15 +0000 (0:00:00.071) 0:00:48.129 ******* 2026-02-13 04:07:39.345165 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:07:39.345176 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:07:39.345187 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:07:39.345197 | orchestrator | 2026-02-13 04:07:39.345208 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:07:39.345220 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-13 04:07:39.345232 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-13 04:07:39.345242 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-13 04:07:39.345253 | orchestrator | 2026-02-13 04:07:39.345264 | orchestrator | 2026-02-13 04:07:39.345274 
| orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:07:39.345285 | orchestrator | Friday 13 February 2026 04:07:39 +0000 (0:00:23.507) 0:01:11.637 ******* 2026-02-13 04:07:39.345296 | orchestrator | =============================================================================== 2026-02-13 04:07:39.345306 | orchestrator | horizon : Restart horizon container ------------------------------------ 23.51s 2026-02-13 04:07:39.345316 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.01s 2026-02-13 04:07:39.345327 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.68s 2026-02-13 04:07:39.345338 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.19s 2026-02-13 04:07:39.345357 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.02s 2026-02-13 04:07:39.345368 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.96s 2026-02-13 04:07:39.345379 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.87s 2026-02-13 04:07:39.345419 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.86s 2026-02-13 04:07:39.345431 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.64s 2026-02-13 04:07:39.345442 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.46s 2026-02-13 04:07:39.345453 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.16s 2026-02-13 04:07:39.345464 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.90s 2026-02-13 04:07:39.345474 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s 2026-02-13 04:07:39.345498 | orchestrator | 
service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2026-02-13 04:07:39.719211 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s 2026-02-13 04:07:39.719326 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-02-13 04:07:39.719417 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s 2026-02-13 04:07:39.719440 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.51s 2026-02-13 04:07:39.719458 | orchestrator | horizon : Update policy file name --------------------------------------- 0.50s 2026-02-13 04:07:39.719476 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.49s 2026-02-13 04:07:42.054277 | orchestrator | 2026-02-13 04:07:42 | INFO  | Task 8b1c5221-6f06-4aa2-b10b-3faae4889b1a (skyline) was prepared for execution. 2026-02-13 04:07:42.054412 | orchestrator | 2026-02-13 04:07:42 | INFO  | It takes a moment until task 8b1c5221-6f06-4aa2-b10b-3faae4889b1a (skyline) has been started and output is visible here. 
2026-02-13 04:08:12.232707 | orchestrator | 2026-02-13 04:08:12.232848 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:08:12.232868 | orchestrator | 2026-02-13 04:08:12.232881 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:08:12.232892 | orchestrator | Friday 13 February 2026 04:07:46 +0000 (0:00:00.255) 0:00:00.255 ******* 2026-02-13 04:08:12.232904 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:08:12.232916 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:08:12.232927 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:08:12.232962 | orchestrator | 2026-02-13 04:08:12.232985 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:08:12.232996 | orchestrator | Friday 13 February 2026 04:07:46 +0000 (0:00:00.301) 0:00:00.556 ******* 2026-02-13 04:08:12.233008 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-02-13 04:08:12.233022 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-02-13 04:08:12.233041 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-02-13 04:08:12.233061 | orchestrator | 2026-02-13 04:08:12.233079 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-02-13 04:08:12.233097 | orchestrator | 2026-02-13 04:08:12.233116 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-13 04:08:12.233135 | orchestrator | Friday 13 February 2026 04:07:46 +0000 (0:00:00.417) 0:00:00.974 ******* 2026-02-13 04:08:12.233156 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:08:12.233176 | orchestrator | 2026-02-13 04:08:12.233195 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-02-13 04:08:12.233209 | orchestrator | Friday 13 February 2026 04:07:47 +0000 (0:00:00.552) 0:00:01.526 ******* 2026-02-13 04:08:12.233220 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-02-13 04:08:12.233231 | orchestrator | 2026-02-13 04:08:12.233244 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-02-13 04:08:12.233257 | orchestrator | Friday 13 February 2026 04:07:50 +0000 (0:00:03.301) 0:00:04.828 ******* 2026-02-13 04:08:12.233271 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-02-13 04:08:12.233284 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-02-13 04:08:12.233297 | orchestrator | 2026-02-13 04:08:12.233338 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-02-13 04:08:12.233351 | orchestrator | Friday 13 February 2026 04:07:56 +0000 (0:00:06.215) 0:00:11.044 ******* 2026-02-13 04:08:12.233364 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-13 04:08:12.233378 | orchestrator | 2026-02-13 04:08:12.233392 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-02-13 04:08:12.233405 | orchestrator | Friday 13 February 2026 04:08:00 +0000 (0:00:03.167) 0:00:14.212 ******* 2026-02-13 04:08:12.233424 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-13 04:08:12.233443 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-02-13 04:08:12.233496 | orchestrator | 2026-02-13 04:08:12.233518 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-02-13 04:08:12.233539 | orchestrator | Friday 13 February 2026 04:08:04 +0000 (0:00:03.977) 0:00:18.189 ******* 2026-02-13 04:08:12.233557 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-13 04:08:12.233576 | orchestrator | 2026-02-13 04:08:12.233588 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-13 04:08:12.233602 | orchestrator | Friday 13 February 2026 04:08:07 +0000 (0:00:03.094) 0:00:21.283 ******* 2026-02-13 04:08:12.233630 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-13 04:08:12.233641 | orchestrator | 2026-02-13 04:08:12.233652 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-13 04:08:12.233662 | orchestrator | Friday 13 February 2026 04:08:10 +0000 (0:00:03.682) 0:00:24.965 ******* 2026-02-13 04:08:12.233677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:12.233713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:12.233725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:12.233738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:12.233766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:12.233792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:16.080942 | orchestrator | 2026-02-13 04:08:16.081041 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-13 04:08:16.081057 | orchestrator | Friday 13 February 2026 04:08:12 +0000 (0:00:01.347) 0:00:26.313 ******* 2026-02-13 04:08:16.081069 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:08:16.081079 | orchestrator | 2026-02-13 04:08:16.081090 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-13 04:08:16.081100 | orchestrator | Friday 13 February 2026 04:08:12 +0000 (0:00:00.752) 0:00:27.065 ******* 2026-02-13 04:08:16.081113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:16.081166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:16.081178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:16.081205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:16.081218 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:16.081229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:16.081246 | orchestrator | 2026-02-13 04:08:16.081256 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-13 04:08:16.081266 | orchestrator | Friday 13 February 2026 04:08:15 +0000 (0:00:02.454) 0:00:29.520 ******* 2026-02-13 04:08:16.081334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-13 04:08:16.081346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-13 04:08:16.081357 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:08:16.081376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-13 04:08:17.298436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-13 04:08:17.298530 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:08:17.298559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-13 04:08:17.298568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-13 04:08:17.298576 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:08:17.298584 | orchestrator | 2026-02-13 04:08:17.298593 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-13 04:08:17.298601 | orchestrator | Friday 13 February 2026 04:08:16 +0000 (0:00:00.646) 0:00:30.166 ******* 2026-02-13 04:08:17.298610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-13 04:08:17.298651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-13 04:08:17.298659 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:08:17.298669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-13 04:08:17.298677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-13 04:08:17.298684 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:08:17.298691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-13 04:08:17.298709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-13 04:08:26.075563 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:08:26.075716 | orchestrator | 2026-02-13 04:08:26.075746 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-02-13 04:08:26.075770 | orchestrator | Friday 13 February 2026 04:08:17 +0000 (0:00:01.209) 0:00:31.376 ******* 2026-02-13 04:08:26.075813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:26.075839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:26.075859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:26.075910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:26.075960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:26.075990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:26.076012 | orchestrator | 2026-02-13 04:08:26.076031 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-02-13 04:08:26.076181 | orchestrator | Friday 13 February 2026 04:08:19 +0000 (0:00:02.586) 0:00:33.962 ******* 2026-02-13 04:08:26.076201 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-13 04:08:26.076220 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-13 04:08:26.076238 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-13 04:08:26.076257 | orchestrator | 2026-02-13 04:08:26.076304 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-02-13 04:08:26.076324 | orchestrator | Friday 13 February 2026 04:08:21 +0000 (0:00:01.666) 0:00:35.629 ******* 2026-02-13 04:08:26.076341 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-13 04:08:26.076377 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-13 04:08:26.076396 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-13 04:08:26.076414 | orchestrator | 2026-02-13 04:08:26.076432 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-02-13 04:08:26.076451 | orchestrator | Friday 13 February 2026 04:08:23 +0000 (0:00:02.087) 0:00:37.716 ******* 2026-02-13 04:08:26.076471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:26.076508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:28.221244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:28.221396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:28.221435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:28.221443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:28.221450 | orchestrator | 2026-02-13 04:08:28.221458 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-13 04:08:28.221466 | orchestrator | Friday 13 February 2026 04:08:26 +0000 (0:00:02.445) 0:00:40.162 ******* 2026-02-13 04:08:28.221473 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:08:28.221481 | orchestrator | skipping: 
[testbed-node-1] 2026-02-13 04:08:28.221487 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:08:28.221494 | orchestrator | 2026-02-13 04:08:28.221514 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-13 04:08:28.221520 | orchestrator | Friday 13 February 2026 04:08:26 +0000 (0:00:00.292) 0:00:40.454 ******* 2026-02-13 04:08:28.221533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:28.221540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:28.221552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:28.221559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:08:28.221576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:09:01.410314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-13 04:09:01.410453 | orchestrator | 2026-02-13 04:09:01.410462 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-02-13 04:09:01.410468 | orchestrator | Friday 13 February 2026 04:08:28 +0000 (0:00:01.845) 0:00:42.300 ******* 2026-02-13 04:09:01.410472 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:09:01.410478 | orchestrator | 2026-02-13 04:09:01.410482 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-02-13 04:09:01.410486 | orchestrator | Friday 13 February 2026 04:08:30 +0000 (0:00:02.005) 0:00:44.305 ******* 2026-02-13 04:09:01.410489 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:09:01.410493 | orchestrator | 2026-02-13 04:09:01.410497 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-02-13 04:09:01.410501 | orchestrator | Friday 13 February 2026 04:08:32 +0000 (0:00:02.233) 0:00:46.539 ******* 2026-02-13 04:09:01.410505 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:09:01.410508 | orchestrator | 2026-02-13 04:09:01.410513 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-13 04:09:01.410517 | orchestrator | Friday 13 February 2026 04:08:39 +0000 (0:00:07.090) 0:00:53.630 ******* 2026-02-13 04:09:01.410521 | orchestrator | 2026-02-13 04:09:01.410525 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-13 04:09:01.410528 | orchestrator | Friday 13 February 2026 04:08:39 +0000 (0:00:00.068) 0:00:53.698 ******* 2026-02-13 04:09:01.410532 | orchestrator | 2026-02-13 04:09:01.410536 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-02-13 04:09:01.410540 | orchestrator | Friday 13 February 2026 04:08:39 +0000 (0:00:00.069) 0:00:53.768 ******* 2026-02-13 04:09:01.410544 | orchestrator | 2026-02-13 04:09:01.410547 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-02-13 04:09:01.410551 | orchestrator | Friday 13 February 2026 04:08:39 +0000 (0:00:00.071) 0:00:53.839 ******* 2026-02-13 04:09:01.410555 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:09:01.410559 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:09:01.410563 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:09:01.410566 | orchestrator | 2026-02-13 04:09:01.410570 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-02-13 04:09:01.410574 | orchestrator | Friday 13 February 2026 04:08:46 +0000 (0:00:06.714) 0:01:00.553 ******* 2026-02-13 04:09:01.410578 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:09:01.410582 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:09:01.410585 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:09:01.410589 | orchestrator | 2026-02-13 04:09:01.410593 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:09:01.410598 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 04:09:01.410604 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 04:09:01.410608 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 04:09:01.410612 | orchestrator | 2026-02-13 04:09:01.410615 | orchestrator | 2026-02-13 04:09:01.410619 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:09:01.410623 | orchestrator | Friday 13 
February 2026 04:09:01 +0000 (0:00:14.631) 0:01:15.184 ******* 2026-02-13 04:09:01.410632 | orchestrator | =============================================================================== 2026-02-13 04:09:01.410636 | orchestrator | skyline : Restart skyline-console container ---------------------------- 14.63s 2026-02-13 04:09:01.410640 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 7.09s 2026-02-13 04:09:01.410643 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 6.71s 2026-02-13 04:09:01.410661 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 6.22s 2026-02-13 04:09:01.410665 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 3.98s 2026-02-13 04:09:01.410669 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.68s 2026-02-13 04:09:01.410673 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.30s 2026-02-13 04:09:01.410676 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.17s 2026-02-13 04:09:01.410692 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.09s 2026-02-13 04:09:01.410696 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.59s 2026-02-13 04:09:01.410699 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.45s 2026-02-13 04:09:01.410703 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.45s 2026-02-13 04:09:01.410707 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.23s 2026-02-13 04:09:01.410711 | orchestrator | skyline : Copying over nginx.conf files for services -------------------- 2.09s 2026-02-13 04:09:01.410714 | orchestrator | skyline : Creating Skyline 
database ------------------------------------- 2.01s 2026-02-13 04:09:01.410718 | orchestrator | skyline : Check skyline container --------------------------------------- 1.85s 2026-02-13 04:09:01.410722 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.67s 2026-02-13 04:09:01.410725 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.35s 2026-02-13 04:09:01.410729 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.21s 2026-02-13 04:09:01.410733 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.75s 2026-02-13 04:09:03.733936 | orchestrator | 2026-02-13 04:09:03 | INFO  | Task 59c7195a-80fc-4f52-be8b-08a315362ce8 (glance) was prepared for execution. 2026-02-13 04:09:03.734122 | orchestrator | 2026-02-13 04:09:03 | INFO  | It takes a moment until task 59c7195a-80fc-4f52-be8b-08a315362ce8 (glance) has been started and output is visible here. 
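The skyline play closes with the standard Ansible PLAY RECAP and TASKS RECAP above before the glance task is queued. As an aside (not part of the job output), the per-host counters in a recap line are easy to pull apart programmatically; a minimal sketch in Python, using a sample line copied from the recap above:

```python
import re

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Split an Ansible PLAY RECAP host line into (hostname, counter dict)."""
    host, _, rest = line.partition(":")
    # Each counter appears as `name=value`, e.g. ok=22 changed=16 ...
    counts = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counts

# Sample taken verbatim from the PLAY RECAP above.
line = ("testbed-node-0 : ok=22  changed=16  unreachable=0 "
        "failed=0 skipped=3  rescued=0 ignored=0")
host, counts = parse_recap(line)
print(host, counts["ok"], counts["changed"], counts["failed"])
```

This kind of parsing is handy when post-processing long periodic-job consoles like this one, e.g. to flag any host with a nonzero `failed` or `unreachable` count.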
2026-02-13 04:09:36.680027 | orchestrator | 2026-02-13 04:09:36.680200 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:09:36.680218 | orchestrator | 2026-02-13 04:09:36.680230 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:09:36.680241 | orchestrator | Friday 13 February 2026 04:09:07 +0000 (0:00:00.258) 0:00:00.258 ******* 2026-02-13 04:09:36.680253 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:09:36.680266 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:09:36.680279 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:09:36.680290 | orchestrator | 2026-02-13 04:09:36.680302 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:09:36.680314 | orchestrator | Friday 13 February 2026 04:09:08 +0000 (0:00:00.322) 0:00:00.581 ******* 2026-02-13 04:09:36.680324 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-13 04:09:36.680332 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-13 04:09:36.680339 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-13 04:09:36.680346 | orchestrator | 2026-02-13 04:09:36.680353 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-13 04:09:36.680360 | orchestrator | 2026-02-13 04:09:36.680367 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-13 04:09:36.680395 | orchestrator | Friday 13 February 2026 04:09:08 +0000 (0:00:00.449) 0:00:01.031 ******* 2026-02-13 04:09:36.680402 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:09:36.680410 | orchestrator | 2026-02-13 04:09:36.680417 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-13 
04:09:36.680423 | orchestrator | Friday 13 February 2026 04:09:09 +0000 (0:00:00.534) 0:00:01.566 ******* 2026-02-13 04:09:36.680450 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-13 04:09:36.680457 | orchestrator | 2026-02-13 04:09:36.680463 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-13 04:09:36.680470 | orchestrator | Friday 13 February 2026 04:09:12 +0000 (0:00:03.279) 0:00:04.845 ******* 2026-02-13 04:09:36.680477 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-13 04:09:36.680484 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-13 04:09:36.680491 | orchestrator | 2026-02-13 04:09:36.680498 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-13 04:09:36.680504 | orchestrator | Friday 13 February 2026 04:09:18 +0000 (0:00:06.299) 0:00:11.144 ******* 2026-02-13 04:09:36.680512 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-13 04:09:36.680520 | orchestrator | 2026-02-13 04:09:36.680527 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-13 04:09:36.680533 | orchestrator | Friday 13 February 2026 04:09:21 +0000 (0:00:03.099) 0:00:14.244 ******* 2026-02-13 04:09:36.680540 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-13 04:09:36.680547 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-13 04:09:36.680554 | orchestrator | 2026-02-13 04:09:36.680560 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-13 04:09:36.680567 | orchestrator | Friday 13 February 2026 04:09:25 +0000 (0:00:03.899) 0:00:18.143 ******* 2026-02-13 04:09:36.680575 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-13 
04:09:36.680583 | orchestrator | 2026-02-13 04:09:36.680604 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-13 04:09:36.680612 | orchestrator | Friday 13 February 2026 04:09:28 +0000 (0:00:03.137) 0:00:21.281 ******* 2026-02-13 04:09:36.680620 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-13 04:09:36.680628 | orchestrator | 2026-02-13 04:09:36.680636 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-13 04:09:36.680643 | orchestrator | Friday 13 February 2026 04:09:32 +0000 (0:00:03.795) 0:00:25.077 ******* 2026-02-13 04:09:36.680673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 04:09:36.680691 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 04:09:36.680716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 04:09:36.680726 | orchestrator | 2026-02-13 04:09:36.680734 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-02-13 04:09:36.680742 | orchestrator | Friday 13 February 2026 04:09:35 +0000 (0:00:03.271) 0:00:28.349 ******* 2026-02-13 04:09:36.680755 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:09:36.680763 | orchestrator | 2026-02-13 04:09:36.680776 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-13 04:09:51.392801 | orchestrator | Friday 13 February 2026 04:09:36 +0000 (0:00:00.688) 0:00:29.038 ******* 2026-02-13 04:09:51.392891 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:09:51.392899 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:09:51.392905 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:09:51.392911 | orchestrator | 2026-02-13 04:09:51.392917 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-13 04:09:51.392922 | orchestrator | Friday 13 February 2026 04:09:40 +0000 (0:00:03.455) 0:00:32.493 ******* 2026-02-13 04:09:51.392928 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-13 04:09:51.392935 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-13 04:09:51.392940 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-13 04:09:51.392944 | orchestrator | 2026-02-13 04:09:51.392950 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-13 04:09:51.392954 | orchestrator | Friday 13 February 2026 04:09:41 +0000 (0:00:01.529) 0:00:34.023 ******* 2026-02-13 04:09:51.392959 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-13 
04:09:51.392964 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-13 04:09:51.392970 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-13 04:09:51.392977 | orchestrator | 2026-02-13 04:09:51.392985 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-13 04:09:51.392993 | orchestrator | Friday 13 February 2026 04:09:43 +0000 (0:00:01.436) 0:00:35.460 ******* 2026-02-13 04:09:51.393001 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:09:51.393010 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:09:51.393017 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:09:51.393025 | orchestrator | 2026-02-13 04:09:51.393032 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-13 04:09:51.393040 | orchestrator | Friday 13 February 2026 04:09:43 +0000 (0:00:00.666) 0:00:36.126 ******* 2026-02-13 04:09:51.393047 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:09:51.393055 | orchestrator | 2026-02-13 04:09:51.393064 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-13 04:09:51.393116 | orchestrator | Friday 13 February 2026 04:09:43 +0000 (0:00:00.126) 0:00:36.253 ******* 2026-02-13 04:09:51.393125 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:09:51.393130 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:09:51.393135 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:09:51.393140 | orchestrator | 2026-02-13 04:09:51.393145 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-13 04:09:51.393150 | orchestrator | Friday 13 February 2026 04:09:44 +0000 (0:00:00.293) 0:00:36.547 ******* 2026-02-13 04:09:51.393155 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:09:51.393160 | orchestrator | 2026-02-13 04:09:51.393165 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-13 04:09:51.393170 | orchestrator | Friday 13 February 2026 04:09:44 +0000 (0:00:00.693) 0:00:37.240 ******* 2026-02-13 04:09:51.393193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 04:09:51.393231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 04:09:51.393241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 04:09:51.393251 | orchestrator | 2026-02-13 04:09:51.393257 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-13 04:09:51.393262 | orchestrator | Friday 13 February 2026 04:09:48 +0000 (0:00:03.655) 0:00:40.895 ******* 2026-02-13 04:09:51.393272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 04:09:54.827579 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:09:54.827727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 04:09:54.827790 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:09:54.827805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 04:09:54.827816 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:09:54.827826 | orchestrator | 2026-02-13 04:09:54.827837 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-13 04:09:54.827848 | orchestrator | Friday 13 February 2026 04:09:51 +0000 (0:00:02.858) 0:00:43.754 ******* 2026-02-13 04:09:54.827886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 04:09:54.827907 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:09:54.827917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 04:09:54.827927 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:09:54.827951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 04:10:27.400716 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:10:27.400828 | orchestrator | 2026-02-13 04:10:27.400843 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-13 04:10:27.400873 | orchestrator | Friday 13 February 2026 04:09:54 +0000 (0:00:03.431) 0:00:47.186 ******* 2026-02-13 04:10:27.400882 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:10:27.400890 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:10:27.400898 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:10:27.400906 | orchestrator | 2026-02-13 04:10:27.400914 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-13 04:10:27.400922 | orchestrator | Friday 13 February 2026 04:09:58 +0000 (0:00:03.245) 0:00:50.431 ******* 2026-02-13 04:10:27.400947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 04:10:27.400959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 04:10:27.400991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-13 04:10:27.401072 | orchestrator |
2026-02-13 04:10:27.401082 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2026-02-13 04:10:27.401091 | orchestrator | Friday 13 February 2026 04:10:01 +0000 (0:00:03.804) 0:00:54.235 *******
2026-02-13 04:10:27.401099 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:10:27.401106 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:10:27.401114 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:10:27.401122 | orchestrator |
2026-02-13 04:10:27.401130 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2026-02-13 04:10:27.401138 | orchestrator | Friday 13 February 2026 04:10:07 +0000 (0:00:05.663) 0:00:59.898 *******
2026-02-13 04:10:27.401145 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:10:27.401153 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:10:27.401161 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:10:27.401169 | orchestrator |
2026-02-13 04:10:27.401176 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2026-02-13 04:10:27.401184 | orchestrator | Friday 13 February 2026 04:10:10 +0000 (0:00:03.351) 0:01:03.250 *******
2026-02-13 04:10:27.401192 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:10:27.401200 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:10:27.401207 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:10:27.401215 | orchestrator |
2026-02-13 04:10:27.401223 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2026-02-13 04:10:27.401231 | orchestrator | Friday 13 February 2026 04:10:14 +0000 (0:00:03.268) 0:01:06.519 *******
2026-02-13 04:10:27.401240 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:10:27.401249 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:10:27.401258 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:10:27.401267 | orchestrator |
2026-02-13 04:10:27.401276 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2026-02-13 04:10:27.401285 | orchestrator | Friday 13 February 2026 04:10:17 +0000 (0:00:03.023) 0:01:09.542 *******
2026-02-13 04:10:27.401294 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:10:27.401303 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:10:27.401312 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:10:27.401321 | orchestrator |
2026-02-13 04:10:27.401330 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2026-02-13 04:10:27.401340 | orchestrator | Friday 13 February 2026 04:10:19 +0000 (0:00:02.803) 0:01:12.346 *******
2026-02-13 04:10:27.401354 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:10:27.401363 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:10:27.401372 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:10:27.401381 | orchestrator |
2026-02-13 04:10:27.401390 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2026-02-13 04:10:27.401399 | orchestrator | Friday 13 February 2026 04:10:20 +0000 (0:00:00.412) 0:01:12.758 *******
2026-02-13 04:10:27.401408 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-13 04:10:27.401419 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:10:27.401428 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-13 04:10:27.401437 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:10:27.401446 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2026-02-13 04:10:27.401455 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:10:27.401464 | orchestrator |
2026-02-13 04:10:27.401473 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] ***********************
2026-02-13 04:10:27.401482 | orchestrator | Friday 13 February 2026 04:10:23 +0000 (0:00:02.871) 0:01:15.630 *******
2026-02-13 04:10:27.401492 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:10:27.401500 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:10:27.401509 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:10:27.401519 | orchestrator |
2026-02-13 04:10:27.401528 | orchestrator | TASK [glance : Check glance containers] ****************************************
2026-02-13 04:10:27.401543 | orchestrator | Friday 13 February 2026 04:10:27 +0000 (0:00:04.124) 0:01:19.754 *******
2026-02-13 04:11:37.238244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image':
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 04:11:37.238336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 04:11:37.238379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-13 04:11:37.238394 | orchestrator |
2026-02-13 04:11:37.238406 | orchestrator | TASK [glance : include_tasks] **************************************************
2026-02-13 04:11:37.238417 | orchestrator | Friday 13 February 2026 04:10:31 +0000 (0:00:03.626) 0:01:23.380 *******
2026-02-13 04:11:37.238428 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:11:37.238439 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:11:37.238448 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:11:37.238458 | orchestrator |
2026-02-13 04:11:37.238469 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2026-02-13 04:11:37.238478 | orchestrator | Friday 13 February 2026 04:10:31 +0000 (0:00:00.496) 0:01:23.877 *******
2026-02-13 04:11:37.238488 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:11:37.238497 | orchestrator |
2026-02-13 04:11:37.238508 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2026-02-13 04:11:37.238518 | orchestrator | Friday 13 February 2026 04:10:33 +0000 (0:00:02.112) 0:01:25.989 *******
2026-02-13 04:11:37.238529 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:11:37.238540 | orchestrator |
2026-02-13 04:11:37.238549 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2026-02-13 04:11:37.238569 | orchestrator | Friday 13 February 2026 04:10:35 +0000 (0:00:02.139) 0:01:28.129 *******
2026-02-13 04:11:37.238580 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:11:37.238590 | orchestrator |
2026-02-13 04:11:37.238601 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2026-02-13 04:11:37.238612 | orchestrator | Friday 13 February 2026 04:10:37 +0000 (0:00:02.000) 0:01:30.129 *******
2026-02-13 04:11:37.238622 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:11:37.238631 | orchestrator |
2026-02-13 04:11:37.238641 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2026-02-13 04:11:37.238651 | orchestrator | Friday 13 February 2026 04:11:05 +0000 (0:00:27.503) 0:01:57.633 *******
2026-02-13 04:11:37.238662 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:11:37.238673 | orchestrator |
2026-02-13 04:11:37.238683 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-02-13 04:11:37.238693 | orchestrator | Friday 13 February 2026 04:11:07 +0000 (0:00:02.061) 0:01:59.694 *******
2026-02-13 04:11:37.238704 | orchestrator |
2026-02-13 04:11:37.238711 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-02-13 04:11:37.238717 | orchestrator | Friday 13 February 2026 04:11:07 +0000 (0:00:00.069) 0:01:59.763 *******
2026-02-13 04:11:37.238723 | orchestrator |
2026-02-13 04:11:37.238729 | orchestrator | TASK [glance : Flush handlers] *************************************************
2026-02-13 04:11:37.238735 | orchestrator | Friday 13 February 2026 04:11:07 +0000 (0:00:00.070) 0:01:59.834 *******
2026-02-13 04:11:37.238741 | orchestrator |
2026-02-13 04:11:37.238747 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2026-02-13 04:11:37.238754 | orchestrator | Friday 13 February 2026 04:11:07 +0000 (0:00:00.069) 0:01:59.904 *******
2026-02-13 04:11:37.238760 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:11:37.238766 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:11:37.238772 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:11:37.238778 | orchestrator |
2026-02-13 04:11:37.238784 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 04:11:37.238791 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-13 04:11:37.238799 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-13 04:11:37.238807 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-13 04:11:37.238813 | orchestrator |
2026-02-13 04:11:37.238821 | orchestrator |
2026-02-13 04:11:37.238828 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 04:11:37.238835 | orchestrator | Friday 13 February 2026 04:11:37 +0000 (0:00:29.681) 0:02:29.586 *******
2026-02-13 04:11:37.238842 | orchestrator | ===============================================================================
2026-02-13 04:11:37.238849 | orchestrator | glance : Restart glance-api container ---------------------------------- 29.68s
2026-02-13 04:11:37.238856 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 27.50s
2026-02-13 04:11:37.238863 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.30s
2026-02-13 04:11:37.238902 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.66s
2026-02-13 04:11:37.550114 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.12s
2026-02-13 04:11:37.550188 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.90s
2026-02-13 04:11:37.550194 | orchestrator | glance : Copying over config.json files for services -------------------- 3.80s
2026-02-13 04:11:37.550199 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.80s
2026-02-13 04:11:37.550203 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.66s
2026-02-13 04:11:37.550237 | orchestrator | glance : Check glance containers ---------------------------------------- 3.63s
2026-02-13 04:11:37.550242 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.46s
2026-02-13 04:11:37.550246 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.43s
2026-02-13 04:11:37.550250 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.35s
2026-02-13 04:11:37.550255 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.28s
2026-02-13 04:11:37.550259 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.27s
2026-02-13 04:11:37.550263 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.27s
2026-02-13 04:11:37.550268 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.25s
2026-02-13 04:11:37.550272 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.14s
2026-02-13 04:11:37.550276 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.10s
2026-02-13 04:11:37.550280 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 3.02s
2026-02-13 04:11:39.863650 | orchestrator | 2026-02-13 04:11:39 | INFO  | Task fba069ac-af51-475a-a100-b0694cd1842d (cinder) was prepared for execution.
2026-02-13 04:11:39.863751 | orchestrator | 2026-02-13 04:11:39 | INFO  | It takes a moment until task fba069ac-af51-475a-a100-b0694cd1842d (cinder) has been started and output is visible here.
2026-02-13 04:12:14.012363 | orchestrator |
2026-02-13 04:12:14.012492 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 04:12:14.012519 | orchestrator |
2026-02-13 04:12:14.012536 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 04:12:14.012553 | orchestrator | Friday 13 February 2026 04:11:44 +0000 (0:00:00.256) 0:00:00.256 *******
2026-02-13 04:12:14.012569 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:12:14.012587 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:12:14.012603 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:12:14.012619 | orchestrator |
2026-02-13 04:12:14.012635 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 04:12:14.012651 | orchestrator | Friday 13 February 2026 04:11:44 +0000 (0:00:00.300) 0:00:00.556 *******
2026-02-13 04:12:14.012666 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2026-02-13 04:12:14.012682 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2026-02-13 04:12:14.012699 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2026-02-13 04:12:14.012715 | orchestrator |
2026-02-13 04:12:14.012733 | orchestrator | PLAY [Apply role cinder] *******************************************************
2026-02-13 04:12:14.012749 | orchestrator |
2026-02-13 04:12:14.012764 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-13 04:12:14.012780 | orchestrator | Friday 13 February 2026 04:11:44 +0000 (0:00:00.422) 0:00:00.979 *******
2026-02-13 04:12:14.012797 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 04:12:14.012842 | orchestrator |
2026-02-13 04:12:14.012860 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2026-02-13 04:12:14.012877 | orchestrator | Friday 13 February 2026 04:11:45 +0000 (0:00:00.540) 0:00:01.520 *******
2026-02-13 04:12:14.012894 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2026-02-13 04:12:14.012911 | orchestrator |
2026-02-13 04:12:14.012928 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2026-02-13 04:12:14.012945 | orchestrator | Friday 13 February 2026 04:11:48 +0000 (0:00:03.330) 0:00:04.850 *******
2026-02-13 04:12:14.012964 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2026-02-13 04:12:14.012981 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2026-02-13 04:12:14.013028 | orchestrator |
2026-02-13 04:12:14.013046 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2026-02-13 04:12:14.013062 | orchestrator | Friday 13 February 2026 04:11:55 +0000 (0:00:06.331) 0:00:11.182 *******
2026-02-13 04:12:14.013080 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-13 04:12:14.013097 | orchestrator |
2026-02-13 04:12:14.013115 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2026-02-13 04:12:14.013131 | orchestrator | Friday 13 February 2026 04:11:58 +0000 (0:00:03.077) 0:00:14.259 *******
2026-02-13 04:12:14.013147 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-13 04:12:14.013164 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2026-02-13 04:12:14.013181 | orchestrator |
2026-02-13 04:12:14.013196 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2026-02-13 04:12:14.013213 | orchestrator | Friday 13 February 2026 04:12:01 +0000 (0:00:03.839) 0:00:18.099 *******
2026-02-13 04:12:14.013229 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-13 04:12:14.013245 | orchestrator |
2026-02-13 04:12:14.013262 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2026-02-13 04:12:14.013278 | orchestrator | Friday 13 February 2026 04:12:04 +0000 (0:00:03.005) 0:00:21.105 *******
2026-02-13 04:12:14.013294 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2026-02-13 04:12:14.013311 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2026-02-13 04:12:14.013327 | orchestrator |
2026-02-13 04:12:14.013343 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2026-02-13 04:12:14.013360 | orchestrator | Friday 13 February 2026 04:12:11 +0000 (0:00:07.014) 0:00:28.119 *******
2026-02-13 04:12:14.013399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy':
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:14.013448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:14.013468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:14.013499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:14.013517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:14.013540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:14.013558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:14.013588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:19.793293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:19.793405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:19.793421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:19.793449 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-13 04:12:19.793457 | orchestrator |
2026-02-13 04:12:19.793465 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-13 04:12:19.793472 | orchestrator | Friday 13 February 2026 04:12:14 +0000 (0:00:02.106) 0:00:30.226 *******
2026-02-13 04:12:19.793478 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:12:19.793485 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:12:19.793490 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:12:19.793496 | orchestrator |
2026-02-13 04:12:19.793502 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-13 04:12:19.793508 | orchestrator | Friday 13 February 2026 04:12:14 +0000 (0:00:00.500) 0:00:30.726 *******
2026-02-13 04:12:19.793514 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 04:12:19.793520 | orchestrator |
2026-02-13 04:12:19.793526 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2026-02-13 04:12:19.793532 | orchestrator | Friday 13 February 2026 04:12:15 +0000 (0:00:00.558) 0:00:31.285 *******
2026-02-13 04:12:19.793538 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume)
2026-02-13 04:12:19.793545 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume)
2026-02-13 04:12:19.793558 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume)
2026-02-13 04:12:19.793563 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup)
2026-02-13 04:12:19.793569 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup)
2026-02-13 04:12:19.793575 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup)
2026-02-13 04:12:19.793581 | orchestrator |
2026-02-13 04:12:19.793586 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2026-02-13 04:12:19.793592 | orchestrator | Friday 13 February 2026 04:12:16 +0000 (0:00:01.676) 0:00:32.961 *******
2026-02-13 04:12:19.793611 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2026-02-13 04:12:19.793620 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-13 04:12:19.793630 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-13 04:12:19.793636 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-13 04:12:19.793647 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-13 04:12:30.714597 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-13 04:12:30.714723 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-13 04:12:30.714773 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-13 04:12:30.714855 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-13 04:12:30.714873 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-13 04:12:30.714930 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-13 
04:12:30.714944 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-13 04:12:30.714956 | orchestrator | 2026-02-13 04:12:30.714969 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-13 04:12:30.714982 | orchestrator | Friday 13 February 2026 04:12:20 +0000 (0:00:03.282) 0:00:36.244 ******* 2026-02-13 04:12:30.714993 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-13 04:12:30.715005 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-13 04:12:30.715016 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-13 04:12:30.715026 | orchestrator | 2026-02-13 04:12:30.715037 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-13 04:12:30.715048 | orchestrator | Friday 13 February 2026 04:12:21 +0000 (0:00:01.617) 0:00:37.861 ******* 2026-02-13 04:12:30.715067 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-13 04:12:30.715085 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-13 04:12:30.715103 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-13 04:12:30.715131 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-13 04:12:30.715151 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-13 04:12:30.715171 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-13 04:12:30.715192 | orchestrator | 2026-02-13 04:12:30.715210 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-13 04:12:30.715228 | orchestrator | Friday 13 February 2026 04:12:24 +0000 (0:00:02.748) 0:00:40.610 ******* 2026-02-13 04:12:30.715243 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-13 04:12:30.715266 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-13 04:12:30.715279 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-13 04:12:30.715291 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-13 04:12:30.715304 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-13 04:12:30.715316 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-13 04:12:30.715328 | orchestrator | 2026-02-13 04:12:30.715342 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-13 04:12:30.715354 | orchestrator | Friday 13 February 2026 04:12:25 +0000 (0:00:01.101) 0:00:41.712 ******* 2026-02-13 04:12:30.715367 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:12:30.715380 | orchestrator | 2026-02-13 04:12:30.715392 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-13 04:12:30.715405 | orchestrator | Friday 13 February 2026 04:12:25 +0000 (0:00:00.124) 0:00:41.837 ******* 2026-02-13 04:12:30.715417 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:12:30.715432 | orchestrator | 
skipping: [testbed-node-1] 2026-02-13 04:12:30.715451 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:12:30.715469 | orchestrator | 2026-02-13 04:12:30.715487 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-13 04:12:30.715507 | orchestrator | Friday 13 February 2026 04:12:26 +0000 (0:00:00.407) 0:00:42.244 ******* 2026-02-13 04:12:30.715526 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:12:30.715545 | orchestrator | 2026-02-13 04:12:30.715563 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-13 04:12:30.715582 | orchestrator | Friday 13 February 2026 04:12:26 +0000 (0:00:00.583) 0:00:42.828 ******* 2026-02-13 04:12:30.715606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:31.463831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:31.463937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:31.463967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:31.463976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:31.463984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:31.464007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:31.464017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:31.464028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 
04:12:31.464046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:31.464058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:31.464070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:31.464083 | orchestrator | 2026-02-13 04:12:31.464097 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-13 04:12:31.464110 | orchestrator | Friday 13 February 2026 04:12:30 +0000 (0:00:04.105) 0:00:46.934 ******* 2026-02-13 04:12:31.464132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 04:12:31.564979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:12:31.565094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 04:12:31.565111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 04:12:31.565122 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:12:31.565135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 04:12:31.565146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:12:31.565175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-13 04:12:31.565210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 04:12:31.565221 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:12:31.565232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 04:12:31.565242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:12:31.565252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 04:12:31.565262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 04:12:31.565278 | orchestrator | skipping: 
[testbed-node-2] 2026-02-13 04:12:31.565289 | orchestrator | 2026-02-13 04:12:31.565301 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-02-13 04:12:31.565318 | orchestrator | Friday 13 February 2026 04:12:31 +0000 (0:00:00.759) 0:00:47.693 ******* 2026-02-13 04:12:32.063424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 04:12:32.063531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:12:32.063548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 04:12:32.063562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 04:12:32.063574 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:12:32.063589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 04:12:32.063649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:12:32.063669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 04:12:32.063681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 04:12:32.063693 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:12:32.063705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 04:12:32.063717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:12:32.063744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 04:12:36.456845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 04:12:36.456957 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:12:36.456976 | orchestrator | 2026-02-13 04:12:36.456989 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 
2026-02-13 04:12:36.457001 | orchestrator | Friday 13 February 2026 04:12:32 +0000 (0:00:00.773) 0:00:48.467 ******* 2026-02-13 04:12:36.457014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:36.457028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 
04:12:36.457040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:36.457093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:36.457116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:36.457128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:36.457140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:36.457153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:36.457172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:36.457191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:49.064236 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:49.064348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:49.064365 | orchestrator | 2026-02-13 04:12:49.064378 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-02-13 04:12:49.064389 | orchestrator | Friday 13 February 2026 04:12:36 +0000 (0:00:04.206) 0:00:52.674 ******* 2026-02-13 04:12:49.064400 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-13 04:12:49.064411 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-13 04:12:49.064420 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-02-13 04:12:49.064430 | orchestrator | 2026-02-13 04:12:49.064440 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-02-13 04:12:49.064449 | orchestrator | Friday 13 February 2026 04:12:38 +0000 (0:00:01.879) 0:00:54.554 ******* 2026-02-13 04:12:49.064461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:49.064494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:49.064529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:49.064542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:49.064553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:49.064563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:49.064581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:49.064593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:49.064615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:51.550825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:51.550939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:51.550981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:51.550995 | orchestrator | 2026-02-13 04:12:51.551009 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2026-02-13 04:12:51.551022 | orchestrator | Friday 13 February 2026 04:12:49 +0000 (0:00:10.733) 0:01:05.287 ******* 2026-02-13 04:12:51.551033 | orchestrator | changed: [testbed-node-0] 
2026-02-13 04:12:51.551045 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:12:51.551056 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:12:51.551067 | orchestrator | 2026-02-13 04:12:51.551078 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-02-13 04:12:51.551089 | orchestrator | Friday 13 February 2026 04:12:50 +0000 (0:00:01.587) 0:01:06.875 ******* 2026-02-13 04:12:51.551101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 04:12:51.551128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-13 04:12:51.551161 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 04:12:51.551174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 04:12:51.551194 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:12:51.551206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 04:12:51.551218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:12:51.551229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 04:12:51.551256 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 04:12:55.102268 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:12:55.102381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-13 04:12:55.102428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 04:12:55.102443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 04:12:55.102456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 04:12:55.102468 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:12:55.102480 | orchestrator | 2026-02-13 
04:12:55.102493 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-13 04:12:55.102505 | orchestrator | Friday 13 February 2026 04:12:51 +0000 (0:00:00.908) 0:01:07.784 ******* 2026-02-13 04:12:55.102516 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:12:55.102527 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:12:55.102537 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:12:55.102548 | orchestrator | 2026-02-13 04:12:55.102559 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-13 04:12:55.102570 | orchestrator | Friday 13 February 2026 04:12:52 +0000 (0:00:00.566) 0:01:08.350 ******* 2026-02-13 04:12:55.102612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:55.102635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:55.102647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-13 04:12:55.102658 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:55.102670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:55.102686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:12:55.102707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:14:39.616768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:14:39.616886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-13 04:14:39.616903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:14:39.616916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-13 04:14:39.616945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}})
2026-02-13 04:14:39.616981 | orchestrator |
2026-02-13 04:14:39.616996 | orchestrator | TASK [cinder : include_tasks] **************************************************
2026-02-13 04:14:39.617009 | orchestrator | Friday 13 February 2026 04:12:55 +0000 (0:00:02.965) 0:01:11.316 *******
2026-02-13 04:14:39.617020 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:14:39.617052 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:14:39.617074 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:14:39.617085 | orchestrator |
2026-02-13 04:14:39.617096 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2026-02-13 04:14:39.617107 | orchestrator | Friday 13 February 2026 04:12:55 +0000 (0:00:00.299) 0:01:11.615 *******
2026-02-13 04:14:39.617118 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:14:39.617129 | orchestrator |
2026-02-13 04:14:39.617156 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] **********
2026-02-13 04:14:39.617168 | orchestrator | Friday 13 February 2026 04:12:57 +0000 (0:00:01.989) 0:01:13.605 *******
2026-02-13 04:14:39.617179 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:14:39.617190 | orchestrator |
2026-02-13 04:14:39.617201 | orchestrator | TASK [cinder : Running Cinder bootstrap container] *****************************
2026-02-13 04:14:39.617212 | orchestrator | Friday 13 February 2026 04:12:59 +0000 (0:00:02.170) 0:01:15.776 *******
2026-02-13 04:14:39.617223 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:14:39.617234 | orchestrator |
2026-02-13 04:14:39.617245 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-02-13 04:14:39.617256 | orchestrator | Friday 13 February 2026 04:13:18 +0000 (0:00:19.148) 0:01:34.924 *******
2026-02-13 04:14:39.617269 | orchestrator |
2026-02-13 04:14:39.617282 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-02-13 04:14:39.617294 | orchestrator | Friday 13 February 2026 04:13:18 +0000 (0:00:00.074) 0:01:34.999 *******
2026-02-13 04:14:39.617307 | orchestrator |
2026-02-13 04:14:39.617319 | orchestrator | TASK [cinder : Flush handlers] *************************************************
2026-02-13 04:14:39.617331 | orchestrator | Friday 13 February 2026 04:13:18 +0000 (0:00:00.069) 0:01:35.068 *******
2026-02-13 04:14:39.617343 | orchestrator |
2026-02-13 04:14:39.617357 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************
2026-02-13 04:14:39.617369 | orchestrator | Friday 13 February 2026 04:13:18 +0000 (0:00:00.070) 0:01:35.138 *******
2026-02-13 04:14:39.617382 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:14:39.617394 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:14:39.617407 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:14:39.617419 | orchestrator |
2026-02-13 04:14:39.617432 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ******************
2026-02-13 04:14:39.617445 | orchestrator | Friday 13 February 2026 04:13:51 +0000 (0:00:32.766) 0:02:07.905 *******
2026-02-13 04:14:39.617457 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:14:39.617470 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:14:39.617483 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:14:39.617495 | orchestrator |
2026-02-13 04:14:39.617507 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] *********************
2026-02-13 04:14:39.617520 | orchestrator | Friday 13 February 2026 04:14:02 +0000 (0:00:10.364) 0:02:18.269 *******
2026-02-13 04:14:39.617533 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:14:39.617545 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:14:39.617558 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:14:39.617571 | orchestrator |
2026-02-13 04:14:39.617583 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] *********************
2026-02-13 04:14:39.617595 | orchestrator | Friday 13 February 2026 04:14:28 +0000 (0:00:26.479) 0:02:44.748 *******
2026-02-13 04:14:39.617608 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:14:39.617650 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:14:39.617661 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:14:39.617672 | orchestrator |
2026-02-13 04:14:39.617684 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] ***
2026-02-13 04:14:39.617695 | orchestrator | Friday 13 February 2026 04:14:39 +0000 (0:00:10.657) 0:02:55.406 *******
2026-02-13 04:14:39.617706 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:14:39.617717 | orchestrator |
2026-02-13 04:14:39.617728 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 04:14:39.617739 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-13 04:14:39.617751 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-13 04:14:39.617762 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-13 04:14:39.617773 | orchestrator |
2026-02-13 04:14:39.617784 | orchestrator |
2026-02-13 04:14:39.617795 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 04:14:39.617806 | orchestrator | Friday 13 February 2026 04:14:39 +0000 (0:00:00.316) 0:02:55.722 *******
2026-02-13 04:14:39.617817 | orchestrator | ===============================================================================
2026-02-13 04:14:39.617828 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 32.77s
2026-02-13 04:14:39.617839 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 26.48s
2026-02-13 04:14:39.617850 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.15s
2026-02-13 04:14:39.617867 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.73s
2026-02-13 04:14:39.617878 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.66s
2026-02-13 04:14:39.617917 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.36s
2026-02-13 04:14:39.617929 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.01s
2026-02-13 04:14:39.617939 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.33s
2026-02-13 04:14:39.617950 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.21s
2026-02-13 04:14:39.617961 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.11s
2026-02-13 04:14:39.617971 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.84s
2026-02-13 04:14:39.617982 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.33s
2026-02-13 04:14:39.617993 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.28s
2026-02-13 04:14:39.618004 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.08s
2026-02-13 04:14:39.618074 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.01s
2026-02-13 04:14:40.000762 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.97s
2026-02-13 04:14:40.000887 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.75s
2026-02-13 04:14:40.000912 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.17s
2026-02-13 04:14:40.000924 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.11s
2026-02-13 04:14:40.000935 | orchestrator | cinder : Creating Cinder database --------------------------------------- 1.99s
2026-02-13 04:14:42.077486 | orchestrator | 2026-02-13 04:14:42 | INFO  | Task f40ea1cd-ed23-454b-8644-8752d4b0e611 (barbican) was prepared for execution.
2026-02-13 04:14:42.077596 | orchestrator | 2026-02-13 04:14:42 | INFO  | It takes a moment until task f40ea1cd-ed23-454b-8644-8752d4b0e611 (barbican) has been started and output is visible here.
2026-02-13 04:15:24.317823 | orchestrator |
2026-02-13 04:15:24.317966 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 04:15:24.317995 | orchestrator |
2026-02-13 04:15:24.318088 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 04:15:24.318114 | orchestrator | Friday 13 February 2026 04:14:46 +0000 (0:00:00.259) 0:00:00.259 *******
2026-02-13 04:15:24.318133 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:15:24.318154 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:15:24.318174 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:15:24.318194 | orchestrator |
2026-02-13 04:15:24.318207 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 04:15:24.318218 | orchestrator | Friday 13 February 2026 04:14:46 +0000 (0:00:00.315) 0:00:00.575 *******
2026-02-13 04:15:24.318229 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2026-02-13 04:15:24.318240 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2026-02-13 04:15:24.318251 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2026-02-13 04:15:24.318262 | orchestrator |
2026-02-13 04:15:24.318273 | orchestrator | PLAY [Apply role barbican] *****************************************************
2026-02-13 04:15:24.318284 | orchestrator |
2026-02-13 04:15:24.318295 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-13 04:15:24.318306 | orchestrator | Friday 13 February 2026 04:14:46 +0000 (0:00:00.425) 0:00:01.000 *******
2026-02-13 04:15:24.318318 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 04:15:24.318332 | orchestrator |
2026-02-13 04:15:24.318345 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2026-02-13 04:15:24.318358 | orchestrator | Friday 13 February 2026 04:14:47 +0000 (0:00:00.547) 0:00:01.547 *******
2026-02-13 04:15:24.318371 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2026-02-13 04:15:24.318383 | orchestrator |
2026-02-13 04:15:24.318395 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2026-02-13 04:15:24.318407 | orchestrator | Friday 13 February 2026 04:14:50 +0000 (0:00:03.321) 0:00:04.869 *******
2026-02-13 04:15:24.318421 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2026-02-13 04:15:24.318434 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2026-02-13 04:15:24.318446 | orchestrator |
2026-02-13 04:15:24.318458 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2026-02-13 04:15:24.318471 | orchestrator | Friday 13 February 2026 04:14:57 +0000 (0:00:06.358) 0:00:11.228 *******
2026-02-13 04:15:24.318484 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-13 04:15:24.318495 | orchestrator |
2026-02-13 04:15:24.318506 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2026-02-13 04:15:24.318516 | orchestrator | Friday 13 February 2026 04:15:00 +0000 (0:00:03.112) 0:00:14.340 *******
2026-02-13 04:15:24.318527 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-13 04:15:24.318538 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-02-13 04:15:24.318549 | orchestrator |
2026-02-13 04:15:24.318560 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-02-13 04:15:24.318601 | orchestrator | Friday 13 February 2026 04:15:03 +0000 (0:00:03.831) 0:00:18.172 *******
2026-02-13 04:15:24.318628 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-13 04:15:24.318672 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-02-13 04:15:24.318691 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-02-13 04:15:24.318709 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-02-13 04:15:24.318726 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-02-13 04:15:24.318745 | orchestrator |
2026-02-13 04:15:24.318763 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-02-13 04:15:24.318813 | orchestrator | Friday 13 February 2026 04:15:18 +0000 (0:00:14.900) 0:00:33.072 *******
2026-02-13 04:15:24.318832 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-02-13 04:15:24.318850 | orchestrator |
2026-02-13 04:15:24.318864 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-02-13 04:15:24.318875 | orchestrator | Friday 13 February 2026 04:15:22 +0000 (0:00:03.779) 0:00:36.852 *******
2026-02-13 04:15:24.318890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes':
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:24.318930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:24.318943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:24.318961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:24.318985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:24.318997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:24.319017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:30.114703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:30.114786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:30.114796 | orchestrator | 2026-02-13 04:15:30.114803 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2026-02-13 04:15:30.114810 | orchestrator | Friday 13 February 2026 04:15:24 +0000 (0:00:01.639) 0:00:38.491 ******* 2026-02-13 04:15:30.114816 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2026-02-13 04:15:30.114821 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2026-02-13 04:15:30.114826 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2026-02-13 04:15:30.114832 | orchestrator | 2026-02-13 04:15:30.114837 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2026-02-13 04:15:30.114842 | orchestrator | Friday 13 February 2026 04:15:25 +0000 (0:00:01.105) 0:00:39.596 ******* 2026-02-13 04:15:30.114867 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:15:30.114873 | orchestrator | 2026-02-13 04:15:30.114878 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2026-02-13 04:15:30.114886 | orchestrator | Friday 13 February 2026 04:15:25 +0000 (0:00:00.309) 0:00:39.906 ******* 2026-02-13 04:15:30.114895 | orchestrator | 
skipping: [testbed-node-0] 2026-02-13 04:15:30.114903 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:15:30.114911 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:15:30.114919 | orchestrator | 2026-02-13 04:15:30.114927 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-13 04:15:30.114950 | orchestrator | Friday 13 February 2026 04:15:26 +0000 (0:00:00.287) 0:00:40.193 ******* 2026-02-13 04:15:30.114958 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:15:30.114966 | orchestrator | 2026-02-13 04:15:30.114975 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2026-02-13 04:15:30.114984 | orchestrator | Friday 13 February 2026 04:15:26 +0000 (0:00:00.539) 0:00:40.733 ******* 2026-02-13 04:15:30.114995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:30.115019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:30.115028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:30.115046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:30.115061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:30.115071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:30.115079 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:30.115096 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:31.476933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:31.477040 | orchestrator | 2026-02-13 04:15:31.477057 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2026-02-13 04:15:31.477095 | orchestrator | Friday 13 February 2026 04:15:30 +0000 (0:00:03.551) 0:00:44.284 ******* 2026-02-13 04:15:31.477110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 04:15:31.477138 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:15:31.477151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:15:31.477162 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:15:31.477175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 04:15:31.477205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:15:31.477226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:15:31.477237 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:15:31.477267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 04:15:31.477279 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:15:31.477290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:15:31.477301 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:15:31.477312 | orchestrator | 2026-02-13 04:15:31.477324 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2026-02-13 04:15:31.477335 | orchestrator | Friday 13 February 2026 04:15:30 +0000 (0:00:00.603) 0:00:44.888 ******* 2026-02-13 04:15:31.477355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 04:15:35.065453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:15:35.065645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 
04:15:35.065673 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:15:35.065709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 04:15:35.065723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:15:35.065734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:15:35.065745 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:15:35.065812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 04:15:35.065849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:15:35.065894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:15:35.065906 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:15:35.065917 | orchestrator | 2026-02-13 04:15:35.065931 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2026-02-13 04:15:35.065946 | orchestrator | Friday 13 February 2026 04:15:31 +0000 (0:00:00.767) 0:00:45.656 ******* 2026-02-13 04:15:35.065959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:35.065974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:35.066005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:44.569263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:44.569386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:44.569401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:44.569413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:44.569452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:44.569471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:44.569487 | orchestrator | 2026-02-13 04:15:44.569506 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2026-02-13 04:15:44.569525 | orchestrator | Friday 13 February 2026 04:15:35 +0000 (0:00:03.587) 0:00:49.243 ******* 2026-02-13 04:15:44.569539 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:15:44.569642 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:15:44.569664 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:15:44.569679 | orchestrator | 2026-02-13 04:15:44.569716 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2026-02-13 04:15:44.569734 | orchestrator | Friday 13 February 2026 04:15:36 +0000 (0:00:01.531) 0:00:50.775 ******* 2026-02-13 04:15:44.569752 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:15:44.569769 | orchestrator | 2026-02-13 04:15:44.569784 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2026-02-13 04:15:44.569799 | orchestrator | Friday 13 February 2026 04:15:37 +0000 (0:00:00.917) 0:00:51.692 ******* 2026-02-13 04:15:44.569815 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:15:44.569834 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:15:44.569851 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:15:44.569869 | orchestrator | 2026-02-13 04:15:44.569887 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2026-02-13 04:15:44.569905 | orchestrator | Friday 13 February 2026 04:15:38 +0000 (0:00:00.581) 0:00:52.274 ******* 2026-02-13 04:15:44.569978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:44.570003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:44.570141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:44.570181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:45.426799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:45.426920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:45.426937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:45.426974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:45.426986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:45.426998 | orchestrator | 2026-02-13 04:15:45.427012 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-13 04:15:45.427024 | orchestrator | Friday 13 February 2026 04:15:44 +0000 (0:00:06.475) 0:00:58.749 ******* 2026-02-13 04:15:45.427055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 04:15:45.427075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:15:45.427088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:15:45.427100 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:15:45.427125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 04:15:45.427137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:15:45.427149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:15:45.427160 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:15:45.427180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-13 04:15:47.820249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:15:47.820383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:15:47.820437 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:15:47.820462 | orchestrator | 2026-02-13 04:15:47.820481 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-13 04:15:47.820536 | orchestrator | Friday 13 February 2026 04:15:45 +0000 (0:00:00.854) 0:00:59.604 ******* 2026-02-13 04:15:47.820624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:47.820648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:47.820703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-13 04:15:47.820725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:47.820765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:47.820785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:47.820804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:47.820822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:47.820841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:15:47.820859 | orchestrator | 2026-02-13 04:15:47.820878 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-13 04:15:47.820907 | orchestrator | Friday 13 February 2026 04:15:47 +0000 (0:00:02.385) 0:01:01.989 ******* 2026-02-13 04:16:31.976947 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:16:31.977067 | orchestrator | skipping: [testbed-node-1] 2026-02-13 
04:16:31.977079 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:16:31.977104 | orchestrator |
2026-02-13 04:16:31.977114 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-02-13 04:16:31.977121 | orchestrator | Friday 13 February 2026 04:15:48 +0000 (0:00:00.311) 0:01:02.301 *******
2026-02-13 04:16:31.977128 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:16:31.977136 | orchestrator |
2026-02-13 04:16:31.977143 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-02-13 04:16:31.977150 | orchestrator | Friday 13 February 2026 04:15:50 +0000 (0:00:02.028) 0:01:04.329 *******
2026-02-13 04:16:31.977157 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:16:31.977164 | orchestrator |
2026-02-13 04:16:31.977170 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-02-13 04:16:31.977177 | orchestrator | Friday 13 February 2026 04:15:52 +0000 (0:00:02.116) 0:01:06.446 *******
2026-02-13 04:16:31.977184 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:16:31.977191 | orchestrator |
2026-02-13 04:16:31.977197 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-13 04:16:31.977204 | orchestrator | Friday 13 February 2026 04:16:04 +0000 (0:00:12.217) 0:01:18.664 *******
2026-02-13 04:16:31.977211 | orchestrator |
2026-02-13 04:16:31.977217 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-13 04:16:31.977224 | orchestrator | Friday 13 February 2026 04:16:04 +0000 (0:00:00.070) 0:01:18.734 *******
2026-02-13 04:16:31.977231 | orchestrator |
2026-02-13 04:16:31.977237 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-02-13 04:16:31.977244 | orchestrator | Friday 13 February 2026 04:16:04 +0000 (0:00:00.068) 0:01:18.803 *******
2026-02-13 04:16:31.977251 | orchestrator |
2026-02-13 04:16:31.977258 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-02-13 04:16:31.977264 | orchestrator | Friday 13 February 2026 04:16:04 +0000 (0:00:00.069) 0:01:18.873 *******
2026-02-13 04:16:31.977271 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:16:31.977277 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:16:31.977284 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:16:31.977291 | orchestrator |
2026-02-13 04:16:31.977298 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-02-13 04:16:31.977304 | orchestrator | Friday 13 February 2026 04:16:15 +0000 (0:00:11.222) 0:01:30.095 *******
2026-02-13 04:16:31.977311 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:16:31.977318 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:16:31.977324 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:16:31.977331 | orchestrator |
2026-02-13 04:16:31.977338 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-02-13 04:16:31.977345 | orchestrator | Friday 13 February 2026 04:16:25 +0000 (0:00:10.088) 0:01:40.184 *******
2026-02-13 04:16:31.977351 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:16:31.977358 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:16:31.977364 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:16:31.977371 | orchestrator |
2026-02-13 04:16:31.977378 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 04:16:31.977385 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-13 04:16:31.977393 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-13 04:16:31.977400 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-13 04:16:31.977407 | orchestrator |
2026-02-13 04:16:31.977414 | orchestrator |
2026-02-13 04:16:31.977420 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 04:16:31.977426 | orchestrator | Friday 13 February 2026 04:16:31 +0000 (0:00:05.661) 0:01:45.845 *******
2026-02-13 04:16:31.977432 | orchestrator | ===============================================================================
2026-02-13 04:16:31.977444 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.90s
2026-02-13 04:16:31.977451 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.22s
2026-02-13 04:16:31.977457 | orchestrator | barbican : Restart barbican-api container ------------------------------ 11.22s
2026-02-13 04:16:31.977464 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.09s
2026-02-13 04:16:31.977471 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.48s
2026-02-13 04:16:31.977478 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.36s
2026-02-13 04:16:31.977484 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.66s
2026-02-13 04:16:31.977491 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.83s
2026-02-13 04:16:31.977500 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.78s
2026-02-13 04:16:31.977507 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.59s
2026-02-13 04:16:31.977514 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.55s
2026-02-13 04:16:31.977567 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.32s
2026-02-13 04:16:31.977573 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.11s
2026-02-13 04:16:31.977580 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.39s
2026-02-13 04:16:31.977587 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.12s
2026-02-13 04:16:31.977609 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.03s
2026-02-13 04:16:31.977621 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.64s
2026-02-13 04:16:31.977629 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.53s
2026-02-13 04:16:31.977636 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.11s
2026-02-13 04:16:31.977644 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.92s
2026-02-13 04:16:34.278478 | orchestrator | 2026-02-13 04:16:34 | INFO  | Task d3677432-3cf8-4313-9ca5-366bb0e6f218 (designate) was prepared for execution.
2026-02-13 04:16:34.278688 | orchestrator | 2026-02-13 04:16:34 | INFO  | It takes a moment until task d3677432-3cf8-4313-9ca5-366bb0e6f218 (designate) has been started and output is visible here.
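The PLAY RECAP lines above are the quickest health signal in a run like this: per-host counters where `failed=0` and `unreachable=0` mean the play succeeded. A minimal sketch of extracting those counters from a captured console log (the helper name `parse_recap` is hypothetical, not part of this job):

```python
import re

# Matches Ansible PLAY RECAP lines such as:
#   testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
RECAP_RE = re.compile(
    r"(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)"
    r"\s+unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(lines):
    """Return {host: counters} for every recap-shaped line found."""
    results = {}
    for line in lines:
        m = RECAP_RE.search(line)
        if m:
            results[m.group("host")] = {
                k: int(v) for k, v in m.groupdict().items() if k != "host"
            }
    return results

recap = parse_recap([
    "testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0",
    "testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0",
])
# Both nodes in this sample report failed=0 and unreachable=0.
assert all(c["failed"] == 0 and c["unreachable"] == 0 for c in recap.values())
```

This only searches for the counter pattern, so the `timestamp | node |` prefix that the Zuul console adds in front of each record does not need to be stripped first.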
2026-02-13 04:17:05.074980 | orchestrator |
2026-02-13 04:17:05.075098 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 04:17:05.075114 | orchestrator |
2026-02-13 04:17:05.075126 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 04:17:05.075138 | orchestrator | Friday 13 February 2026 04:16:38 +0000 (0:00:00.278) 0:00:00.278 *******
2026-02-13 04:17:05.075149 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:17:05.075161 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:17:05.075172 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:17:05.075183 | orchestrator |
2026-02-13 04:17:05.075194 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 04:17:05.075206 | orchestrator | Friday 13 February 2026 04:16:38 +0000 (0:00:00.318) 0:00:00.596 *******
2026-02-13 04:17:05.075217 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-13 04:17:05.075229 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-13 04:17:05.075240 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-13 04:17:05.075250 | orchestrator |
2026-02-13 04:17:05.075261 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-13 04:17:05.075272 | orchestrator |
2026-02-13 04:17:05.075283 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-13 04:17:05.075294 | orchestrator | Friday 13 February 2026 04:16:39 +0000 (0:00:00.442) 0:00:01.039 *******
2026-02-13 04:17:05.075328 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 04:17:05.075341 | orchestrator |
2026-02-13 04:17:05.075351 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-13 04:17:05.075362 | orchestrator | Friday 13 February 2026 04:16:39 +0000 (0:00:00.566) 0:00:01.605 *******
2026-02-13 04:17:05.075373 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-13 04:17:05.075383 | orchestrator |
2026-02-13 04:17:05.075394 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-13 04:17:05.075405 | orchestrator | Friday 13 February 2026 04:16:43 +0000 (0:00:03.333) 0:00:04.939 *******
2026-02-13 04:17:05.075416 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-13 04:17:05.075427 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-13 04:17:05.075438 | orchestrator |
2026-02-13 04:17:05.075449 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-13 04:17:05.075460 | orchestrator | Friday 13 February 2026 04:16:49 +0000 (0:00:06.199) 0:00:11.138 *******
2026-02-13 04:17:05.075471 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-13 04:17:05.075482 | orchestrator |
2026-02-13 04:17:05.075526 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-13 04:17:05.075540 | orchestrator | Friday 13 February 2026 04:16:52 +0000 (0:00:03.094) 0:00:14.233 *******
2026-02-13 04:17:05.075553 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-13 04:17:05.075566 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-13 04:17:05.075579 | orchestrator |
2026-02-13 04:17:05.075592 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-13 04:17:05.075605 | orchestrator | Friday 13 February 2026 04:16:56 +0000 (0:00:03.886) 0:00:18.120 *******
2026-02-13 04:17:05.075618 | orchestrator | ok: [testbed-node-0] =>
(item=admin) 2026-02-13 04:17:05.075630 | orchestrator | 2026-02-13 04:17:05.075644 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-02-13 04:17:05.075658 | orchestrator | Friday 13 February 2026 04:16:59 +0000 (0:00:03.024) 0:00:21.144 ******* 2026-02-13 04:17:05.075671 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-02-13 04:17:05.075682 | orchestrator | 2026-02-13 04:17:05.075693 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-02-13 04:17:05.075704 | orchestrator | Friday 13 February 2026 04:17:02 +0000 (0:00:03.685) 0:00:24.829 ******* 2026-02-13 04:17:05.075733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 04:17:05.075771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 04:17:05.075792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 04:17:05.075806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:05.075819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:05.075835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:05.075863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:05.075904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:11.059406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:11.059707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:11.059728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:11.059738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:11.059748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:11.059772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:11.059821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:11.059833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 
04:17:11.059842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:11.059851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:11.059871 | orchestrator | 2026-02-13 04:17:11.059883 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-02-13 04:17:11.059894 | orchestrator | Friday 13 February 2026 04:17:05 +0000 (0:00:02.863) 0:00:27.692 ******* 2026-02-13 04:17:11.059902 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:17:11.059912 | orchestrator | 2026-02-13 04:17:11.059921 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-02-13 04:17:11.059930 | orchestrator | Friday 13 February 2026 04:17:05 +0000 (0:00:00.137) 0:00:27.830 ******* 2026-02-13 04:17:11.059938 | orchestrator | skipping: [testbed-node-0] 2026-02-13 
04:17:11.059948 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:17:11.059957 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:17:11.059965 | orchestrator | 2026-02-13 04:17:11.059974 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-13 04:17:11.059983 | orchestrator | Friday 13 February 2026 04:17:06 +0000 (0:00:00.494) 0:00:28.325 ******* 2026-02-13 04:17:11.059992 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:17:11.060007 | orchestrator | 2026-02-13 04:17:11.060016 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-02-13 04:17:11.060025 | orchestrator | Friday 13 February 2026 04:17:06 +0000 (0:00:00.525) 0:00:28.851 ******* 2026-02-13 04:17:11.060040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 04:17:11.060058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 04:17:12.886573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 04:17:12.886723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886865 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:12.886969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:13.783299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:13.783410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:13.783434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:13.783560 | orchestrator | 2026-02-13 04:17:13.783586 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-02-13 04:17:13.783598 | orchestrator | Friday 13 February 2026 04:17:12 +0000 (0:00:05.883) 0:00:34.735 ******* 2026-02-13 04:17:13.783625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:13.783637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-13 04:17:13.783667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-13 04:17:13.783679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-13 04:17:13.783690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-13 04:17:13.783709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:17:13.783719 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:17:13.783735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:13.783746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-13 04:17:13.783756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-13 04:17:13.783773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.598332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.598452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.598467 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:17:14.598555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:14.598571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-13 04:17:14.598582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.598593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.598619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.598639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.598649 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:17:14.598659 | orchestrator |
2026-02-13 04:17:14.598671 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-02-13 04:17:14.598682 | orchestrator | Friday 13 February 2026 04:17:13 +0000 (0:00:00.996) 0:00:35.731 *******
2026-02-13 04:17:14.598697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:14.598708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-13 04:17:14.598718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.598735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.934665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.934768 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.934786 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:17:14.934818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:14.934832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-13 04:17:14.934844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.934856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.934907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.934921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.934938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:14.934950 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:17:14.934962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-13 04:17:14.934975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.934987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-13 04:17:14.935016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-13 04:17:19.563182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:17:19.563275 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:17:19.563288 | orchestrator |
2026-02-13 04:17:19.563298 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-02-13 04:17:19.563308 | orchestrator | Friday 13 February 2026 04:17:14 +0000 (0:00:01.047) 0:00:36.778 *******
2026-02-13 04:17:19.563332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:19.563343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:19.563352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:19.563391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-13 04:17:19.563403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-13 04:17:19.563415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-13 04:17:19.563424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-13 04:17:19.563434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-13 04:17:19.563442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-13 04:17:19.563457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-13 04:17:19.563472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-13 04:17:31.042895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-13 04:17:31.043051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-13 04:17:31.043084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-13 04:17:31.043105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-13 04:17:31.043156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:17:31.043178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:17:31.043214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:17:31.043227 | orchestrator |
2026-02-13 04:17:31.043240 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2026-02-13 04:17:31.043253 | orchestrator | Friday 13 February 2026 04:17:21 +0000 (0:00:06.489) 0:00:43.268 *******
2026-02-13 04:17:31.043272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:31.043285 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:31.043305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-13 04:17:31.043318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:31.043340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:39.097630 | orchestrator | 2026-02-13 04:17:39.097643 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-13 04:17:39.097656 | orchestrator | Friday 13 February 2026 04:17:35 +0000 (0:00:14.065) 0:00:57.333 ******* 2026-02-13 04:17:39.097678 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-13 04:17:43.397350 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-13 04:17:43.397436 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-13 04:17:43.397449 | orchestrator | 2026-02-13 04:17:43.397507 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-13 04:17:43.397524 | orchestrator | Friday 13 February 2026 04:17:39 +0000 (0:00:03.607) 0:01:00.940 ******* 2026-02-13 04:17:43.397538 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-13 04:17:43.397552 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-13 04:17:43.397565 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-13 04:17:43.397578 | orchestrator | 2026-02-13 04:17:43.397607 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-13 04:17:43.397621 | orchestrator | Friday 13 February 2026 04:17:41 +0000 (0:00:02.440) 0:01:03.381 ******* 2026-02-13 04:17:43.397661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 04:17:43.397679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 04:17:43.397694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-13 04:17:43.397731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:43.397749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 04:17:43.397769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-13 04:17:43.397794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 04:17:43.397810 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:43.397825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-13 04:17:43.397839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 04:17:43.397862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 04:17:46.223895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-13 04:17:46.223993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 04:17:46.224002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 04:17:46.224006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 04:17:46.224010 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:46.224015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:46.224029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:46.224037 | orchestrator | 2026-02-13 04:17:46.224042 | orchestrator | TASK [designate : Copying over rndc.key] 
*************************************** 2026-02-13 04:17:46.224047 | orchestrator | Friday 13 February 2026 04:17:44 +0000 (0:00:02.975) 0:01:06.356 ******* 2026-02-13 04:17:46.224056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 04:17:46.224061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 
04:17:46.224065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 04:17:46.224069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:46.224076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 04:17:47.232115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 04:17:47.232186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 04:17:47.232193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:47.232199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 04:17:47.232203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 04:17:47.232207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 04:17:47.232241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:47.232246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 04:17:47.232250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 04:17:47.232254 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 04:17:47.232258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:47.232263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:47.232270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:47.232276 | orchestrator | 2026-02-13 04:17:47.232283 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-13 04:17:47.232297 | orchestrator | Friday 13 February 2026 04:17:47 +0000 (0:00:02.716) 0:01:09.072 ******* 2026-02-13 04:17:48.151933 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:17:48.152037 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:17:48.152051 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:17:48.152064 | orchestrator | 2026-02-13 04:17:48.152077 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-02-13 04:17:48.152106 | orchestrator | Friday 13 February 2026 04:17:47 +0000 (0:00:00.295) 0:01:09.368 ******* 2026-02-13 04:17:48.152122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 04:17:48.152137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 04:17:48.152151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 04:17:48.152164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 04:17:48.152196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 04:17:48.152225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:17:48.152243 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:17:48.152256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 04:17:48.152267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 04:17:48.152279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 04:17:48.152290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 04:17:48.152312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 04:17:48.152331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:17:51.582212 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:17:51.582342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-13 04:17:51.582363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 04:17:51.582391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 04:17:51.582405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 04:17:51.582441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 04:17:51.582454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:17:51.582574 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:17:51.582588 | orchestrator | 2026-02-13 04:17:51.582620 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-02-13 04:17:51.582640 | orchestrator | Friday 13 February 2026 04:17:48 +0000 (0:00:00.741) 0:01:10.109 ******* 2026-02-13 04:17:51.582652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 04:17:51.582665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 04:17:51.582677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-13 04:17:51.582700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:51.582719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:17:53.419969 | orchestrator | 2026-02-13 04:17:53.419982 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-02-13 04:17:53.419995 | orchestrator | Friday 13 February 2026 04:17:52 +0000 (0:00:04.646) 0:01:14.755 ******* 2026-02-13 04:17:53.420006 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:17:53.420025 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:19:16.264678 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:19:16.264805 | orchestrator | 2026-02-13 04:19:16.264852 | orchestrator | TASK [designate : Creating Designate databases] 
******************************** 2026-02-13 04:19:16.264875 | orchestrator | Friday 13 February 2026 04:17:53 +0000 (0:00:00.513) 0:01:15.269 ******* 2026-02-13 04:19:16.264896 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-02-13 04:19:16.264916 | orchestrator | 2026-02-13 04:19:16.264935 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-02-13 04:19:16.264954 | orchestrator | Friday 13 February 2026 04:17:55 +0000 (0:00:02.196) 0:01:17.465 ******* 2026-02-13 04:19:16.264966 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-13 04:19:16.264978 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-02-13 04:19:16.264989 | orchestrator | 2026-02-13 04:19:16.265000 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-02-13 04:19:16.265010 | orchestrator | Friday 13 February 2026 04:17:57 +0000 (0:00:02.217) 0:01:19.683 ******* 2026-02-13 04:19:16.265021 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:19:16.265032 | orchestrator | 2026-02-13 04:19:16.265043 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-13 04:19:16.265054 | orchestrator | Friday 13 February 2026 04:18:13 +0000 (0:00:15.349) 0:01:35.033 ******* 2026-02-13 04:19:16.265064 | orchestrator | 2026-02-13 04:19:16.265075 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-13 04:19:16.265109 | orchestrator | Friday 13 February 2026 04:18:13 +0000 (0:00:00.072) 0:01:35.105 ******* 2026-02-13 04:19:16.265121 | orchestrator | 2026-02-13 04:19:16.265131 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-02-13 04:19:16.265142 | orchestrator | Friday 13 February 2026 04:18:13 +0000 (0:00:00.071) 0:01:35.176 ******* 2026-02-13 04:19:16.265153 | orchestrator | 2026-02-13 
04:19:16.265166 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-13 04:19:16.265186 | orchestrator | Friday 13 February 2026 04:18:13 +0000 (0:00:00.073) 0:01:35.249 ******* 2026-02-13 04:19:16.265204 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:19:16.265223 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:19:16.265242 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:19:16.265261 | orchestrator | 2026-02-13 04:19:16.265282 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-13 04:19:16.265301 | orchestrator | Friday 13 February 2026 04:18:22 +0000 (0:00:08.834) 0:01:44.083 ******* 2026-02-13 04:19:16.265322 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:19:16.265336 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:19:16.265348 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:19:16.265361 | orchestrator | 2026-02-13 04:19:16.265374 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-13 04:19:16.265386 | orchestrator | Friday 13 February 2026 04:18:32 +0000 (0:00:10.698) 0:01:54.782 ******* 2026-02-13 04:19:16.265429 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:19:16.265442 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:19:16.265455 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:19:16.265467 | orchestrator | 2026-02-13 04:19:16.265479 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-13 04:19:16.265492 | orchestrator | Friday 13 February 2026 04:18:38 +0000 (0:00:05.752) 0:02:00.535 ******* 2026-02-13 04:19:16.265505 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:19:16.265517 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:19:16.265530 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:19:16.265549 | orchestrator | 2026-02-13 04:19:16.265583 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-13 04:19:16.265602 | orchestrator | Friday 13 February 2026 04:18:47 +0000 (0:00:08.539) 0:02:09.074 ******* 2026-02-13 04:19:16.265621 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:19:16.265641 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:19:16.265659 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:19:16.265677 | orchestrator | 2026-02-13 04:19:16.265688 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-13 04:19:16.265699 | orchestrator | Friday 13 February 2026 04:18:57 +0000 (0:00:10.507) 0:02:19.582 ******* 2026-02-13 04:19:16.265710 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:19:16.265721 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:19:16.265732 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:19:16.265742 | orchestrator | 2026-02-13 04:19:16.265753 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-13 04:19:16.265764 | orchestrator | Friday 13 February 2026 04:19:08 +0000 (0:00:11.101) 0:02:30.684 ******* 2026-02-13 04:19:16.265775 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:19:16.265785 | orchestrator | 2026-02-13 04:19:16.265796 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:19:16.265808 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 04:19:16.265822 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 04:19:16.265833 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 04:19:16.265843 | orchestrator | 2026-02-13 04:19:16.265864 | orchestrator | 2026-02-13 04:19:16.265875 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-13 04:19:16.265886 | orchestrator | Friday 13 February 2026 04:19:15 +0000 (0:00:07.019) 0:02:37.703 ******* 2026-02-13 04:19:16.265897 | orchestrator | =============================================================================== 2026-02-13 04:19:16.265916 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.35s 2026-02-13 04:19:16.265935 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.07s 2026-02-13 04:19:16.265976 | orchestrator | designate : Restart designate-worker container ------------------------- 11.10s 2026-02-13 04:19:16.266005 | orchestrator | designate : Restart designate-api container ---------------------------- 10.70s 2026-02-13 04:19:16.266098 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.51s 2026-02-13 04:19:16.266111 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 8.83s 2026-02-13 04:19:16.266122 | orchestrator | designate : Restart designate-producer container ------------------------ 8.54s 2026-02-13 04:19:16.266132 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.02s 2026-02-13 04:19:16.266143 | orchestrator | designate : Copying over config.json files for services ----------------- 6.49s 2026-02-13 04:19:16.266154 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.20s 2026-02-13 04:19:16.266164 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.88s 2026-02-13 04:19:16.266175 | orchestrator | designate : Restart designate-central container ------------------------- 5.75s 2026-02-13 04:19:16.266185 | orchestrator | designate : Check designate containers ---------------------------------- 4.65s 2026-02-13 04:19:16.266196 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 3.89s 2026-02-13 04:19:16.266217 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.69s 2026-02-13 04:19:16.266228 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.61s 2026-02-13 04:19:16.266238 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.33s 2026-02-13 04:19:16.266249 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.09s 2026-02-13 04:19:16.266260 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.02s 2026-02-13 04:19:16.266270 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 2.98s 2026-02-13 04:19:18.636337 | orchestrator | 2026-02-13 04:19:18 | INFO  | Task a94ed04c-adb1-48d1-9a77-a80d9de69537 (octavia) was prepared for execution. 2026-02-13 04:19:18.636512 | orchestrator | 2026-02-13 04:19:18 | INFO  | It takes a moment until task a94ed04c-adb1-48d1-9a77-a80d9de69537 (octavia) has been started and output is visible here. 
2026-02-13 04:21:24.104729 | orchestrator | 2026-02-13 04:21:24.104869 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:21:24.104886 | orchestrator | 2026-02-13 04:21:24.104898 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:21:24.104910 | orchestrator | Friday 13 February 2026 04:19:22 +0000 (0:00:00.250) 0:00:00.250 ******* 2026-02-13 04:21:24.104922 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:21:24.104934 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:21:24.104945 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:21:24.104956 | orchestrator | 2026-02-13 04:21:24.104967 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:21:24.104978 | orchestrator | Friday 13 February 2026 04:19:23 +0000 (0:00:00.320) 0:00:00.570 ******* 2026-02-13 04:21:24.104989 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-13 04:21:24.105000 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-13 04:21:24.105011 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-13 04:21:24.105023 | orchestrator | 2026-02-13 04:21:24.105034 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-13 04:21:24.105082 | orchestrator | 2026-02-13 04:21:24.105100 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-13 04:21:24.105119 | orchestrator | Friday 13 February 2026 04:19:23 +0000 (0:00:00.436) 0:00:01.007 ******* 2026-02-13 04:21:24.105138 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:21:24.105243 | orchestrator | 2026-02-13 04:21:24.105431 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-13 04:21:24.105457 | orchestrator | Friday 13 February 2026 04:19:24 +0000 (0:00:00.548) 0:00:01.555 ******* 2026-02-13 04:21:24.105478 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-13 04:21:24.105498 | orchestrator | 2026-02-13 04:21:24.105518 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-13 04:21:24.105537 | orchestrator | Friday 13 February 2026 04:19:27 +0000 (0:00:03.445) 0:00:05.000 ******* 2026-02-13 04:21:24.105556 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-13 04:21:24.105576 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-13 04:21:24.105596 | orchestrator | 2026-02-13 04:21:24.105616 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-13 04:21:24.105635 | orchestrator | Friday 13 February 2026 04:19:33 +0000 (0:00:06.502) 0:00:11.502 ******* 2026-02-13 04:21:24.105655 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-13 04:21:24.105674 | orchestrator | 2026-02-13 04:21:24.105694 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-13 04:21:24.105714 | orchestrator | Friday 13 February 2026 04:19:37 +0000 (0:00:03.176) 0:00:14.679 ******* 2026-02-13 04:21:24.105734 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-13 04:21:24.105754 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-13 04:21:24.105773 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-13 04:21:24.105793 | orchestrator | 2026-02-13 04:21:24.105813 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-13 04:21:24.105832 | orchestrator | Friday 13 February 2026 04:19:45 +0000 
(0:00:08.051) 0:00:22.730 ******* 2026-02-13 04:21:24.105852 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-13 04:21:24.105871 | orchestrator | 2026-02-13 04:21:24.105908 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-13 04:21:24.105929 | orchestrator | Friday 13 February 2026 04:19:48 +0000 (0:00:03.172) 0:00:25.903 ******* 2026-02-13 04:21:24.105949 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-13 04:21:24.105969 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-13 04:21:24.105988 | orchestrator | 2026-02-13 04:21:24.106008 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-13 04:21:24.106094 | orchestrator | Friday 13 February 2026 04:19:55 +0000 (0:00:07.070) 0:00:32.974 ******* 2026-02-13 04:21:24.106113 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-13 04:21:24.106130 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-13 04:21:24.106149 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-13 04:21:24.106169 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-13 04:21:24.106186 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-13 04:21:24.106204 | orchestrator | 2026-02-13 04:21:24.106222 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-13 04:21:24.106240 | orchestrator | Friday 13 February 2026 04:20:10 +0000 (0:00:14.945) 0:00:47.920 ******* 2026-02-13 04:21:24.106252 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:21:24.106277 | orchestrator | 2026-02-13 04:21:24.106287 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-13 04:21:24.106298 | orchestrator | Friday 13 February 2026 04:20:11 +0000 (0:00:00.774) 0:00:48.694 ******* 2026-02-13 04:21:24.106309 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:24.106350 | orchestrator | 2026-02-13 04:21:24.106369 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-13 04:21:24.106381 | orchestrator | Friday 13 February 2026 04:20:15 +0000 (0:00:04.610) 0:00:53.305 ******* 2026-02-13 04:21:24.106392 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:24.106403 | orchestrator | 2026-02-13 04:21:24.106414 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-13 04:21:24.106446 | orchestrator | Friday 13 February 2026 04:20:20 +0000 (0:00:04.283) 0:00:57.589 ******* 2026-02-13 04:21:24.106458 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:21:24.106468 | orchestrator | 2026-02-13 04:21:24.106479 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-13 04:21:24.106490 | orchestrator | Friday 13 February 2026 04:20:23 +0000 (0:00:03.130) 0:01:00.720 ******* 2026-02-13 04:21:24.106500 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-13 04:21:24.106511 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-13 04:21:24.106522 | orchestrator | 2026-02-13 04:21:24.106532 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-13 04:21:24.106543 | orchestrator | Friday 13 February 2026 04:20:33 +0000 (0:00:10.462) 0:01:11.182 ******* 2026-02-13 04:21:24.106554 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-13 04:21:24.106565 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-13 04:21:24.106577 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-13 04:21:24.106589 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-13 04:21:24.106600 | orchestrator | 2026-02-13 04:21:24.106611 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-13 04:21:24.106626 | orchestrator | Friday 13 February 2026 04:20:48 +0000 (0:00:15.008) 0:01:26.191 ******* 2026-02-13 04:21:24.106637 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:24.106647 | orchestrator | 2026-02-13 04:21:24.106658 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-13 04:21:24.106669 | orchestrator | Friday 13 February 2026 04:20:53 +0000 (0:00:04.755) 0:01:30.947 ******* 2026-02-13 04:21:24.106679 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:24.106690 | orchestrator | 2026-02-13 04:21:24.106701 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-13 04:21:24.106711 | orchestrator | Friday 13 February 2026 04:20:58 +0000 (0:00:05.240) 0:01:36.188 ******* 2026-02-13 04:21:24.106722 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:21:24.106732 | orchestrator | 2026-02-13 04:21:24.106743 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-13 04:21:24.106754 | orchestrator | Friday 13 February 2026 04:20:58 +0000 (0:00:00.215) 0:01:36.403 ******* 2026-02-13 04:21:24.106778 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:21:24.106789 | orchestrator | 2026-02-13 04:21:24.106810 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-13 04:21:24.106821 | orchestrator | Friday 13 February 2026 04:21:03 +0000 (0:00:04.412) 0:01:40.815 ******* 2026-02-13 04:21:24.106831 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:21:24.106842 | orchestrator | 2026-02-13 04:21:24.106853 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-13 04:21:24.106872 | orchestrator | Friday 13 February 2026 04:21:04 +0000 (0:00:01.100) 0:01:41.916 ******* 2026-02-13 04:21:24.106883 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:24.106893 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:21:24.106904 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:21:24.106915 | orchestrator | 2026-02-13 04:21:24.106933 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-13 04:21:24.106944 | orchestrator | Friday 13 February 2026 04:21:10 +0000 (0:00:06.233) 0:01:48.150 ******* 2026-02-13 04:21:24.106955 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:24.106965 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:21:24.106976 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:21:24.106986 | orchestrator | 2026-02-13 04:21:24.106997 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-13 04:21:24.107008 | orchestrator | Friday 13 February 2026 04:21:15 +0000 (0:00:04.629) 0:01:52.779 ******* 2026-02-13 04:21:24.107018 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:24.107029 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:21:24.107040 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:21:24.107050 | orchestrator | 2026-02-13 04:21:24.107061 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-13 
04:21:24.107072 | orchestrator | Friday 13 February 2026 04:21:16 +0000 (0:00:01.046) 0:01:53.825 ******* 2026-02-13 04:21:24.107082 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:21:24.107093 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:21:24.107104 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:21:24.107114 | orchestrator | 2026-02-13 04:21:24.107125 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-13 04:21:24.107136 | orchestrator | Friday 13 February 2026 04:21:18 +0000 (0:00:01.975) 0:01:55.801 ******* 2026-02-13 04:21:24.107147 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:21:24.107157 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:24.107168 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:21:24.107178 | orchestrator | 2026-02-13 04:21:24.107189 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-13 04:21:24.107200 | orchestrator | Friday 13 February 2026 04:21:19 +0000 (0:00:01.286) 0:01:57.088 ******* 2026-02-13 04:21:24.107210 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:24.107221 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:21:24.107232 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:21:24.107264 | orchestrator | 2026-02-13 04:21:24.107281 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-13 04:21:24.107299 | orchestrator | Friday 13 February 2026 04:21:20 +0000 (0:00:01.226) 0:01:58.314 ******* 2026-02-13 04:21:24.107339 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:21:24.107359 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:24.107379 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:21:24.107399 | orchestrator | 2026-02-13 04:21:24.107430 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-13 04:21:48.889365 | orchestrator 
| Friday 13 February 2026 04:21:24 +0000 (0:00:03.320) 0:02:01.635 ******* 2026-02-13 04:21:48.889461 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:21:48.889472 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:21:48.889479 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:21:48.889486 | orchestrator | 2026-02-13 04:21:48.889494 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-13 04:21:48.889502 | orchestrator | Friday 13 February 2026 04:21:25 +0000 (0:00:01.617) 0:02:03.252 ******* 2026-02-13 04:21:48.889509 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:21:48.889517 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:21:48.889524 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:21:48.889531 | orchestrator | 2026-02-13 04:21:48.889538 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-13 04:21:48.889545 | orchestrator | Friday 13 February 2026 04:21:26 +0000 (0:00:00.671) 0:02:03.924 ******* 2026-02-13 04:21:48.889573 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:21:48.889580 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:21:48.889586 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:21:48.889593 | orchestrator | 2026-02-13 04:21:48.889600 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-13 04:21:48.889607 | orchestrator | Friday 13 February 2026 04:21:29 +0000 (0:00:03.106) 0:02:07.031 ******* 2026-02-13 04:21:48.889614 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:21:48.889621 | orchestrator | 2026-02-13 04:21:48.889628 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-13 04:21:48.889635 | orchestrator | Friday 13 February 2026 04:21:29 +0000 (0:00:00.511) 0:02:07.543 ******* 2026-02-13 
04:21:48.889642 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:21:48.889648 | orchestrator | 2026-02-13 04:21:48.889655 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-13 04:21:48.889662 | orchestrator | Friday 13 February 2026 04:21:33 +0000 (0:00:03.345) 0:02:10.888 ******* 2026-02-13 04:21:48.889668 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:21:48.889675 | orchestrator | 2026-02-13 04:21:48.889682 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-13 04:21:48.889688 | orchestrator | Friday 13 February 2026 04:21:36 +0000 (0:00:03.133) 0:02:14.021 ******* 2026-02-13 04:21:48.889695 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-13 04:21:48.889702 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-13 04:21:48.889709 | orchestrator | 2026-02-13 04:21:48.889716 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-13 04:21:48.889723 | orchestrator | Friday 13 February 2026 04:21:43 +0000 (0:00:06.558) 0:02:20.580 ******* 2026-02-13 04:21:48.889729 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:21:48.889736 | orchestrator | 2026-02-13 04:21:48.889743 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-13 04:21:48.889750 | orchestrator | Friday 13 February 2026 04:21:46 +0000 (0:00:03.375) 0:02:23.955 ******* 2026-02-13 04:21:48.889756 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:21:48.889763 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:21:48.889769 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:21:48.889776 | orchestrator | 2026-02-13 04:21:48.889783 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-13 04:21:48.889790 | orchestrator | Friday 13 February 2026 04:21:46 +0000 (0:00:00.481) 0:02:24.437 ******* 
2026-02-13 04:21:48.889811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:21:48.889834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:21:48.889851 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:21:48.889860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:21:48.889867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:21:48.889878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:21:48.889886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:48.889896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:48.889914 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:50.335077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:50.335166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:50.335175 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:50.335198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:21:50.335207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:21:50.335232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:21:50.335239 | orchestrator | 2026-02-13 04:21:50.335248 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-13 04:21:50.335256 | orchestrator | Friday 13 February 2026 04:21:49 +0000 (0:00:02.418) 0:02:26.855 ******* 2026-02-13 04:21:50.335263 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:21:50.335271 | orchestrator | 2026-02-13 04:21:50.335277 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-13 04:21:50.335283 | orchestrator | Friday 13 February 2026 04:21:49 +0000 (0:00:00.118) 0:02:26.973 ******* 2026-02-13 04:21:50.335289 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:21:50.335374 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:21:50.335384 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:21:50.335391 | orchestrator | 2026-02-13 04:21:50.335397 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-13 04:21:50.335403 | orchestrator | Friday 13 February 2026 04:21:49 +0000 (0:00:00.262) 0:02:27.236 ******* 2026-02-13 04:21:50.335411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 04:21:50.335420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 04:21:50.335433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 04:21:50.335440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 04:21:50.335454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:21:50.335461 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:21:50.335474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 04:21:55.369459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 04:21:55.369571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 04:21:55.369606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 04:21:55.369649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:21:55.369672 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:21:55.369694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 04:21:55.369715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 04:21:55.369761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 04:21:55.369828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 04:21:55.369848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:21:55.369871 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:21:55.369885 | orchestrator | 2026-02-13 04:21:55.369899 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-13 04:21:55.369913 | orchestrator | Friday 13 February 2026 04:21:50 +0000 (0:00:00.728) 0:02:27.964 ******* 2026-02-13 04:21:55.369926 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:21:55.369938 | orchestrator | 2026-02-13 04:21:55.369951 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-13 04:21:55.369964 | orchestrator | Friday 13 February 2026 04:21:51 +0000 (0:00:00.766) 0:02:28.731 ******* 2026-02-13 04:21:55.369976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:21:55.369993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:21:55.370095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:21:56.842119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:21:56.842263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:21:56.842280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:21:56.842293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:56.842363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:56.842375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:56.842406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:56.842419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:56.842444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:21:56.842457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:21:56.842469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:21:56.842480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:21:56.842492 | orchestrator | 2026-02-13 04:21:56.842506 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-13 04:21:56.842519 | orchestrator | Friday 13 February 2026 04:21:56 +0000 (0:00:05.161) 0:02:33.892 ******* 2026-02-13 04:21:56.842539 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 04:21:56.941526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 04:21:56.941623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 04:21:56.941635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 04:21:56.941644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:21:56.941651 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:21:56.941660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 04:21:56.941668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 04:21:56.941705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 04:21:56.941716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 04:21:56.941723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:21:56.941729 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:21:56.941736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 04:21:56.941742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 04:21:56.941749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 04:21:56.941766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})
2026-02-13 04:21:57.563127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:21:57.563230 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:21:57.563246 | orchestrator |
2026-02-13 04:21:57.563260 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-02-13 04:21:57.563272 | orchestrator | Friday 13 February 2026 04:21:56 +0000 (0:00:00.589) 0:02:34.482 *******
2026-02-13 04:21:57.563285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-13 04:21:57.563331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 04:21:57.563346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 04:21:57.563383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 04:21:57.563413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:21:57.563425 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:21:57.563452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 04:21:57.563465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 04:21:57.563476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 04:21:57.563487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 04:21:57.563506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 04:21:57.563517 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:21:57.563540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 04:22:02.099242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 04:22:02.099425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-02-13 04:22:02.099451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-02-13 04:22:02.099469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:22:02.099511 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:22:02.099525 | orchestrator |
2026-02-13 04:22:02.099535 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-02-13 04:22:02.099545 | orchestrator | Friday 13 February 2026 04:21:57 +0000 (0:00:00.996) 0:02:35.479 *******
2026-02-13 04:22:02.099554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-02-13 04:22:02.099597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:22:02.099607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:22:02.099616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:22:02.099624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:22:02.099639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:22:02.099647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:02.099665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:17.861897 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:17.862009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:17.862090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:17.862144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:17.862174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:22:17.862197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-13 04:22:17.862258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-02-13 04:22:17.862280 | orchestrator |
2026-02-13 04:22:17.862327 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2026-02-13 04:22:17.862348 | orchestrator | Friday 13 February 2026 04:22:03 +0000 (0:00:05.136) 0:02:40.615 *******
2026-02-13 04:22:17.862366 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-13 04:22:17.862386 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-13 04:22:17.862403 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2026-02-13 04:22:17.862419 | orchestrator |
2026-02-13 04:22:17.862438 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2026-02-13 04:22:17.862457 | orchestrator | Friday 13 February 2026 04:22:04 +0000 (0:00:01.628) 0:02:42.244 *******
2026-02-13 04:22:17.862477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:22:17.862513 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:22:17.862535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:22:17.862575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:22:32.689271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:22:32.689486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:22:32.689530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:32.689544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:32.689556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:32.689568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:32.689614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:32.689628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:32.689639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:22:32.689659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:22:32.689671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:22:32.689682 | orchestrator | 2026-02-13 04:22:32.689696 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-13 04:22:32.689708 | orchestrator | Friday 13 February 2026 04:22:20 +0000 (0:00:16.195) 0:02:58.439 ******* 2026-02-13 04:22:32.689720 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:22:32.689733 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:22:32.689744 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:22:32.689755 | orchestrator | 2026-02-13 04:22:32.689766 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-13 04:22:32.689778 | orchestrator | Friday 13 February 2026 04:22:22 +0000 (0:00:01.728) 0:03:00.168 ******* 2026-02-13 04:22:32.689788 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-13 04:22:32.689800 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-13 04:22:32.689812 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-13 04:22:32.689825 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-13 04:22:32.689838 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-13 04:22:32.689851 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-13 04:22:32.689863 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-13 04:22:32.689876 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-13 04:22:32.689888 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-13 04:22:32.689900 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-13 04:22:32.689913 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-13 04:22:32.689933 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-13 04:22:32.689950 | orchestrator | 2026-02-13 04:22:32.689977 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-13 04:22:32.689996 | orchestrator | Friday 13 February 2026 04:22:27 +0000 (0:00:04.963) 0:03:05.132 ******* 2026-02-13 04:22:32.690014 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-13 04:22:32.690103 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-13 04:22:32.690136 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-13 04:22:41.288935 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-13 04:22:41.289080 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-13 04:22:41.289096 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-13 04:22:41.289107 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-13 04:22:41.289119 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-13 04:22:41.289130 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-13 04:22:41.289141 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-13 04:22:41.289178 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-13 04:22:41.289191 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-13 04:22:41.289202 | orchestrator | 2026-02-13 04:22:41.289215 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-13 04:22:41.289228 | orchestrator | Friday 13 February 2026 04:22:32 +0000 (0:00:05.087) 0:03:10.219 ******* 2026-02-13 04:22:41.289239 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-13 04:22:41.289250 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-13 04:22:41.289304 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-13 04:22:41.289316 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-13 04:22:41.289328 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-13 04:22:41.289339 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-13 04:22:41.289349 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-13 04:22:41.289360 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-13 04:22:41.289371 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-13 04:22:41.289382 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-13 04:22:41.289393 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-13 04:22:41.289404 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-13 04:22:41.289415 | orchestrator | 2026-02-13 04:22:41.289426 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-13 04:22:41.289437 | orchestrator | Friday 13 February 2026 04:22:37 +0000 (0:00:05.240) 0:03:15.459 ******* 2026-02-13 04:22:41.289452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:22:41.289470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:22:41.289553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 04:22:41.289571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:22:41.289600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-13 04:22:41.289611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-13 04:22:41.289623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:41.289636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:41.289661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-13 04:22:41.289680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:24:01.514180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:24:01.514293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-13 04:24:01.514309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:01.514319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:01.514343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:01.514352 | orchestrator | 2026-02-13 
04:24:01.514361 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-13 04:24:01.514370 | orchestrator | Friday 13 February 2026 04:22:41 +0000 (0:00:04.029) 0:03:19.489 ******* 2026-02-13 04:24:01.514377 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:01.514394 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:01.514402 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:01.514410 | orchestrator | 2026-02-13 04:24:01.514417 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-13 04:24:01.514425 | orchestrator | Friday 13 February 2026 04:22:42 +0000 (0:00:00.526) 0:03:20.016 ******* 2026-02-13 04:24:01.514433 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:24:01.514440 | orchestrator | 2026-02-13 04:24:01.514447 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-13 04:24:01.514455 | orchestrator | Friday 13 February 2026 04:22:44 +0000 (0:00:02.092) 0:03:22.108 ******* 2026-02-13 04:24:01.514463 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:24:01.514470 | orchestrator | 2026-02-13 04:24:01.514478 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-13 04:24:01.514486 | orchestrator | Friday 13 February 2026 04:22:46 +0000 (0:00:02.130) 0:03:24.239 ******* 2026-02-13 04:24:01.514495 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:24:01.514503 | orchestrator | 2026-02-13 04:24:01.514511 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-13 04:24:01.514520 | orchestrator | Friday 13 February 2026 04:22:48 +0000 (0:00:02.147) 0:03:26.386 ******* 2026-02-13 04:24:01.514539 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:24:01.514548 | orchestrator | 2026-02-13 04:24:01.514556 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-13 04:24:01.514564 | orchestrator | Friday 13 February 2026 04:22:51 +0000 (0:00:02.272) 0:03:28.658 ******* 2026-02-13 04:24:01.514572 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:24:01.514580 | orchestrator | 2026-02-13 04:24:01.514588 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-13 04:24:01.514595 | orchestrator | Friday 13 February 2026 04:23:12 +0000 (0:00:21.758) 0:03:50.417 ******* 2026-02-13 04:24:01.514602 | orchestrator | 2026-02-13 04:24:01.514610 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-13 04:24:01.514618 | orchestrator | Friday 13 February 2026 04:23:12 +0000 (0:00:00.068) 0:03:50.485 ******* 2026-02-13 04:24:01.514626 | orchestrator | 2026-02-13 04:24:01.514634 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-13 04:24:01.514641 | orchestrator | Friday 13 February 2026 04:23:13 +0000 (0:00:00.081) 0:03:50.567 ******* 2026-02-13 04:24:01.514649 | orchestrator | 2026-02-13 04:24:01.514657 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-13 04:24:01.514664 | orchestrator | Friday 13 February 2026 04:23:13 +0000 (0:00:00.071) 0:03:50.639 ******* 2026-02-13 04:24:01.514672 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:24:01.514680 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:24:01.514687 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:24:01.514695 | orchestrator | 2026-02-13 04:24:01.514703 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-13 04:24:01.514719 | orchestrator | Friday 13 February 2026 04:23:25 +0000 (0:00:12.518) 0:04:03.157 ******* 2026-02-13 04:24:01.514727 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:24:01.514735 | orchestrator | changed: 
[testbed-node-1] 2026-02-13 04:24:01.514742 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:24:01.514749 | orchestrator | 2026-02-13 04:24:01.514757 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-13 04:24:01.514765 | orchestrator | Friday 13 February 2026 04:23:37 +0000 (0:00:11.817) 0:04:14.974 ******* 2026-02-13 04:24:01.514773 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:24:01.514781 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:24:01.514789 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:24:01.514797 | orchestrator | 2026-02-13 04:24:01.514805 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-13 04:24:01.514813 | orchestrator | Friday 13 February 2026 04:23:45 +0000 (0:00:08.176) 0:04:23.151 ******* 2026-02-13 04:24:01.514820 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:24:01.514829 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:24:01.514836 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:24:01.514844 | orchestrator | 2026-02-13 04:24:01.514852 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-13 04:24:01.514860 | orchestrator | Friday 13 February 2026 04:23:51 +0000 (0:00:05.600) 0:04:28.751 ******* 2026-02-13 04:24:01.514868 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:24:01.514876 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:24:01.514883 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:24:01.514891 | orchestrator | 2026-02-13 04:24:01.514899 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:24:01.514908 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 04:24:01.514918 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-13 04:24:01.514926 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-13 04:24:01.514934 | orchestrator | 2026-02-13 04:24:01.514942 | orchestrator | 2026-02-13 04:24:01.514950 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:24:01.514958 | orchestrator | Friday 13 February 2026 04:24:01 +0000 (0:00:10.274) 0:04:39.026 ******* 2026-02-13 04:24:01.514966 | orchestrator | =============================================================================== 2026-02-13 04:24:01.514974 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.76s 2026-02-13 04:24:01.514981 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.20s 2026-02-13 04:24:01.514989 | orchestrator | octavia : Add rules for security groups -------------------------------- 15.01s 2026-02-13 04:24:01.514997 | orchestrator | octavia : Adding octavia related roles --------------------------------- 14.95s 2026-02-13 04:24:01.515010 | orchestrator | octavia : Restart octavia-api container -------------------------------- 12.52s 2026-02-13 04:24:01.515018 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.82s 2026-02-13 04:24:01.515026 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.46s 2026-02-13 04:24:01.515034 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.27s 2026-02-13 04:24:01.515042 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.18s 2026-02-13 04:24:01.515050 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.05s 2026-02-13 04:24:01.515057 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.07s 2026-02-13 04:24:01.515065 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 6.56s 2026-02-13 04:24:01.515077 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.50s 2026-02-13 04:24:01.515085 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.23s 2026-02-13 04:24:01.515100 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.60s 2026-02-13 04:24:01.831914 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.24s 2026-02-13 04:24:01.832005 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.24s 2026-02-13 04:24:01.832019 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.16s 2026-02-13 04:24:01.832028 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.14s 2026-02-13 04:24:01.832037 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.09s 2026-02-13 04:24:04.168102 | orchestrator | 2026-02-13 04:24:04 | INFO  | Task 96e3cf84-6577-44c8-8314-2b1fc90ee888 (ceilometer) was prepared for execution. 2026-02-13 04:24:04.168207 | orchestrator | 2026-02-13 04:24:04 | INFO  | It takes a moment until task 96e3cf84-6577-44c8-8314-2b1fc90ee888 (ceilometer) has been started and output is visible here. 
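The container definitions echoed throughout the octavia play above each carry a healthcheck dict with the fields `interval`, `retries`, `start_period`, `test`, and `timeout`. As a minimal sketch (an illustrative helper, not the actual kolla_docker module), this is roughly how such a dict maps onto Docker's `docker run` health flags; the field names and the sample values are taken verbatim from the log, while the function itself is hypothetical:

```python
# Hypothetical helper: render a kolla-style healthcheck dict (as seen in the
# log output above) into the equivalent `docker run` health flags. The field
# names come from the log; the flag mapping assumes Docker's standard
# --health-* options and that bare numbers are seconds.

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    args = [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]
    # 'test' is ['CMD-SHELL', '<command>']; CMD-SHELL runs via /bin/sh -c.
    kind, command = hc["test"]
    if kind == "CMD-SHELL":
        args.append(f"--health-cmd={command}")
    return args

# Example with the octavia-worker healthcheck from the log:
hc = {
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
    "timeout": "30",
}
print(healthcheck_to_docker_args(hc))
```

This also explains the port numbers in the log: `healthcheck_port octavia-worker 5672` checks that the worker holds a connection to RabbitMQ (5672), while the health-manager and housekeeping checks probe MariaDB (3306), and the API containers use `healthcheck_curl` against their own bound address on 9876.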
2026-02-13 04:24:26.319078 | orchestrator | 2026-02-13 04:24:26.319186 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:24:26.319201 | orchestrator | 2026-02-13 04:24:26.319211 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:24:26.319222 | orchestrator | Friday 13 February 2026 04:24:08 +0000 (0:00:00.274) 0:00:00.274 ******* 2026-02-13 04:24:26.319276 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:24:26.319289 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:24:26.319299 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:24:26.319309 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:24:26.319319 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:24:26.319329 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:24:26.319338 | orchestrator | 2026-02-13 04:24:26.319348 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:24:26.319358 | orchestrator | Friday 13 February 2026 04:24:09 +0000 (0:00:00.736) 0:00:01.011 ******* 2026-02-13 04:24:26.319369 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-13 04:24:26.319379 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-13 04:24:26.319389 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-13 04:24:26.319399 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-13 04:24:26.319409 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-13 04:24:26.319418 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-13 04:24:26.319428 | orchestrator | 2026-02-13 04:24:26.319438 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-13 04:24:26.319447 | orchestrator | 2026-02-13 04:24:26.319457 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-13 04:24:26.319467 | orchestrator | Friday 13 February 2026 04:24:09 +0000 (0:00:00.619) 0:00:01.630 ******* 2026-02-13 04:24:26.319478 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 04:24:26.319489 | orchestrator | 2026-02-13 04:24:26.319499 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-13 04:24:26.319508 | orchestrator | Friday 13 February 2026 04:24:10 +0000 (0:00:01.186) 0:00:02.816 ******* 2026-02-13 04:24:26.319519 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:26.319528 | orchestrator | 2026-02-13 04:24:26.319538 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-13 04:24:26.319548 | orchestrator | Friday 13 February 2026 04:24:10 +0000 (0:00:00.114) 0:00:02.931 ******* 2026-02-13 04:24:26.319558 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:26.319568 | orchestrator | 2026-02-13 04:24:26.319600 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-13 04:24:26.319611 | orchestrator | Friday 13 February 2026 04:24:11 +0000 (0:00:00.138) 0:00:03.070 ******* 2026-02-13 04:24:26.319621 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-13 04:24:26.319632 | orchestrator | 2026-02-13 04:24:26.319643 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-13 04:24:26.319655 | orchestrator | Friday 13 February 2026 04:24:14 +0000 (0:00:03.333) 0:00:06.403 ******* 2026-02-13 04:24:26.319666 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-13 04:24:26.319678 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-13 04:24:26.319689 | orchestrator | 
2026-02-13 04:24:26.319700 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-13 04:24:26.319711 | orchestrator | Friday 13 February 2026 04:24:17 +0000 (0:00:03.311) 0:00:09.715 ******* 2026-02-13 04:24:26.319723 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-13 04:24:26.319734 | orchestrator | 2026-02-13 04:24:26.319759 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-13 04:24:26.319771 | orchestrator | Friday 13 February 2026 04:24:20 +0000 (0:00:03.134) 0:00:12.850 ******* 2026-02-13 04:24:26.319785 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-13 04:24:26.319802 | orchestrator | 2026-02-13 04:24:26.319819 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-13 04:24:26.319835 | orchestrator | Friday 13 February 2026 04:24:24 +0000 (0:00:03.858) 0:00:16.708 ******* 2026-02-13 04:24:26.319853 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:26.319878 | orchestrator | 2026-02-13 04:24:26.319896 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-13 04:24:26.319913 | orchestrator | Friday 13 February 2026 04:24:24 +0000 (0:00:00.133) 0:00:16.842 ******* 2026-02-13 04:24:26.319935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:26.319982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:26.320002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:26.320020 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:26.320051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:24:26.320072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:24:26.320090 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:26.320121 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:30.994390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:24:30.994570 | orchestrator | 2026-02-13 04:24:30.994593 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-13 04:24:30.994607 | orchestrator | Friday 13 February 2026 04:24:26 +0000 (0:00:01.441) 0:00:18.284 ******* 2026-02-13 04:24:30.994619 | orchestrator | ok: [testbed-node-2 -> 
localhost] 2026-02-13 04:24:30.994630 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:24:30.994641 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-13 04:24:30.994652 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-13 04:24:30.994663 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-13 04:24:30.994674 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-13 04:24:30.994684 | orchestrator | 2026-02-13 04:24:30.994696 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-13 04:24:30.994708 | orchestrator | Friday 13 February 2026 04:24:27 +0000 (0:00:01.569) 0:00:19.854 ******* 2026-02-13 04:24:30.994719 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:24:30.994730 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:24:30.994741 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:24:30.994751 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:24:30.994762 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:24:30.994772 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:24:30.994783 | orchestrator | 2026-02-13 04:24:30.994793 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-13 04:24:30.994804 | orchestrator | Friday 13 February 2026 04:24:28 +0000 (0:00:00.619) 0:00:20.473 ******* 2026-02-13 04:24:30.994815 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:30.994825 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:30.994837 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:30.994848 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:30.994859 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:30.994871 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:30.994883 | orchestrator | 2026-02-13 04:24:30.994914 | orchestrator | TASK [ceilometer : Set the variable that control the copy of custom meter 
definitions] *** 2026-02-13 04:24:30.994939 | orchestrator | Friday 13 February 2026 04:24:29 +0000 (0:00:00.756) 0:00:21.229 ******* 2026-02-13 04:24:30.994951 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:24:30.994963 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:24:30.994975 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:24:30.994987 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:24:30.994999 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:24:30.995060 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:24:30.995073 | orchestrator | 2026-02-13 04:24:30.995101 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-13 04:24:30.995123 | orchestrator | Friday 13 February 2026 04:24:29 +0000 (0:00:00.668) 0:00:21.897 ******* 2026-02-13 04:24:30.995147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:30.995170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:30.995213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:30.995251 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:30.995265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:30.995276 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:30.995288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:30.995300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:30.995317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:30.995329 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:30.995340 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:30.995352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:30.995371 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:30.995391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:35.554148 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:35.554272 | orchestrator | 2026-02-13 04:24:35.554284 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-13 04:24:35.554292 | orchestrator | Friday 13 February 2026 04:24:30 +0000 (0:00:01.060) 0:00:22.958 ******* 2026-02-13 04:24:35.554300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:35.554309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:35.554316 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:35.554335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:35.554341 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:35.554364 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:35.554370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:35.554376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-02-13 04:24:35.554382 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:35.554400 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:35.554407 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:35.554413 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:35.554419 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:35.554428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:35.554434 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:35.554445 | orchestrator | 2026-02-13 04:24:35.554452 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-13 04:24:35.554459 | orchestrator | Friday 13 February 2026 04:24:31 +0000 (0:00:00.824) 0:00:23.782 ******* 2026-02-13 04:24:35.554466 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:24:35.554472 | orchestrator | 2026-02-13 04:24:35.554478 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-13 04:24:35.554484 | orchestrator | Friday 13 February 2026 04:24:32 +0000 (0:00:00.678) 0:00:24.461 ******* 2026-02-13 04:24:35.554490 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:24:35.554497 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:24:35.554503 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:24:35.554509 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:24:35.554514 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:24:35.554520 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:24:35.554526 | orchestrator | 2026-02-13 04:24:35.554532 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-13 04:24:35.554537 | orchestrator | Friday 13 February 2026 04:24:33 +0000 (0:00:00.829) 
0:00:25.290 ******* 2026-02-13 04:24:35.554543 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:24:35.554549 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:24:35.554555 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:24:35.554560 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:24:35.554566 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:24:35.554572 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:24:35.554577 | orchestrator | 2026-02-13 04:24:35.554583 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-13 04:24:35.554589 | orchestrator | Friday 13 February 2026 04:24:34 +0000 (0:00:00.912) 0:00:26.203 ******* 2026-02-13 04:24:35.554595 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:35.554601 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:35.554607 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:35.554612 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:35.554618 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:35.554624 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:35.554630 | orchestrator | 2026-02-13 04:24:35.554636 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-13 04:24:35.554641 | orchestrator | Friday 13 February 2026 04:24:34 +0000 (0:00:00.751) 0:00:26.955 ******* 2026-02-13 04:24:35.554647 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:35.554654 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:35.554659 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:35.554665 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:35.554671 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:35.554677 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:35.554683 | orchestrator | 2026-02-13 04:24:40.409088 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************ 2026-02-13 04:24:40.409195 | orchestrator | Friday 13 February 2026 04:24:35 +0000 (0:00:00.572) 0:00:27.527 ******* 2026-02-13 04:24:40.409213 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:24:40.409266 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-13 04:24:40.409276 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-13 04:24:40.409284 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-13 04:24:40.409291 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-13 04:24:40.409298 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-13 04:24:40.409306 | orchestrator | 2026-02-13 04:24:40.409315 | orchestrator | TASK [ceilometer : Copying over polling.yaml] ********************************** 2026-02-13 04:24:40.409322 | orchestrator | Friday 13 February 2026 04:24:36 +0000 (0:00:01.444) 0:00:28.971 ******* 2026-02-13 04:24:40.409332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:40.409366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:40.409375 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:40.409395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:40.409404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:40.409411 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:40.409419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:40.409445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:40.409459 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:40.409471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:40.409493 | orchestrator | skipping: [testbed-node-3] 
2026-02-13 04:24:40.409506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:40.409519 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:40.409538 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:40.409550 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:40.409563 | orchestrator | 2026-02-13 04:24:40.409571 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] ************************* 2026-02-13 04:24:40.409578 | orchestrator | Friday 13 February 2026 04:24:37 +0000 (0:00:00.787) 0:00:29.759 ******* 2026-02-13 04:24:40.409585 | orchestrator | 
skipping: [testbed-node-0] 2026-02-13 04:24:40.409593 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:40.409624 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:40.409634 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:40.409642 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:40.409651 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:40.409659 | orchestrator | 2026-02-13 04:24:40.409667 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] ***************** 2026-02-13 04:24:40.409676 | orchestrator | Friday 13 February 2026 04:24:38 +0000 (0:00:00.823) 0:00:30.583 ******* 2026-02-13 04:24:40.409684 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:24:40.409692 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-13 04:24:40.409701 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-13 04:24:40.409709 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-13 04:24:40.409718 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-13 04:24:40.409726 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-13 04:24:40.409734 | orchestrator | 2026-02-13 04:24:40.409743 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************ 2026-02-13 04:24:40.409752 | orchestrator | Friday 13 February 2026 04:24:39 +0000 (0:00:01.353) 0:00:31.936 ******* 2026-02-13 04:24:40.409769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:46.178496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:46.178602 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:46.178621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:46.178655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:46.178670 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:46.178681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:46.178694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:46.178705 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:46.178716 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:46.178750 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:46.178779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:46.178789 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:46.178799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:46.178810 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:46.178821 | orchestrator | 2026-02-13 04:24:46.178833 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] *************** 2026-02-13 04:24:46.178843 | orchestrator | Friday 13 February 2026 04:24:41 +0000 (0:00:01.093) 0:00:33.030 ******* 2026-02-13 04:24:46.178853 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:46.178862 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:46.178872 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:46.178887 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:46.178898 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:46.178908 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:46.178919 | orchestrator | 2026-02-13 04:24:46.178929 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] ********************* 2026-02-13 04:24:46.178938 | orchestrator | Friday 13 February 2026 04:24:41 +0000 (0:00:00.748) 0:00:33.778 ******* 2026-02-13 04:24:46.178948 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:46.178957 | orchestrator | 2026-02-13 04:24:46.178966 | orchestrator | TASK [ceilometer : Set ceilometer policy file] ********************************* 2026-02-13 04:24:46.178976 | orchestrator | Friday 13 February 2026 04:24:41 +0000 (0:00:00.141) 0:00:33.919 ******* 2026-02-13 04:24:46.178987 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:46.178998 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:46.179008 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:46.179017 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:46.179027 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:46.179038 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:46.179048 | 
orchestrator | 2026-02-13 04:24:46.179058 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-13 04:24:46.179078 | orchestrator | Friday 13 February 2026 04:24:42 +0000 (0:00:00.590) 0:00:34.510 ******* 2026-02-13 04:24:46.179090 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 04:24:46.179103 | orchestrator | 2026-02-13 04:24:46.179114 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] ***** 2026-02-13 04:24:46.179124 | orchestrator | Friday 13 February 2026 04:24:43 +0000 (0:00:01.283) 0:00:35.793 ******* 2026-02-13 04:24:46.179135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:46.179156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:46.680209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:46.680428 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:46.680469 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:46.680504 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:46.680517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:24:46.680530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:24:46.680561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:24:46.680574 | orchestrator | 2026-02-13 04:24:46.680588 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] *** 2026-02-13 04:24:46.680601 | orchestrator | Friday 13 February 2026 04:24:46 +0000 (0:00:02.353) 0:00:38.147 ******* 2026-02-13 04:24:46.680614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:46.680642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:46.680676 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:46.680697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:46.680715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:46.680735 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:46.680755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:46.680787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:48.528482 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:48.528636 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:48.528658 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:48.528689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:48.528755 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:48.528770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': 
'30'}}})  2026-02-13 04:24:48.528782 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:48.528794 | orchestrator | 2026-02-13 04:24:48.528807 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] *** 2026-02-13 04:24:48.528819 | orchestrator | Friday 13 February 2026 04:24:47 +0000 (0:00:00.835) 0:00:38.982 ******* 2026-02-13 04:24:48.528831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:48.528845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:48.528878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:48.528894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:48.528920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:24:48.528934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:24:48.528947 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:24:48.528961 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:24:48.528974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:48.528989 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:24:48.529009 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:24:48.529029 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:48.529061 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:24:48.529096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:24:55.797185 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:24:55.797370 | orchestrator | 2026-02-13 04:24:55.797389 | orchestrator | TASK [ceilometer : Copying over config.json files for services] **************** 2026-02-13 04:24:55.797402 | orchestrator | Friday 13 February 2026 04:24:48 +0000 (0:00:01.510) 0:00:40.493 ******* 2026-02-13 04:24:55.797432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:55.797448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:55.797460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:55.797472 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:55.797484 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:55.797515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:55.797556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:24:55.797569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:24:55.797581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:24:55.797592 | orchestrator | 2026-02-13 04:24:55.797603 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-02-13 04:24:55.797614 | orchestrator | Friday 13 February 2026 04:24:51 +0000 (0:00:02.646) 0:00:43.139 
******* 2026-02-13 04:24:55.797626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:55.797637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:24:55.797655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:05.017763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:05.017876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:05.017892 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:05.017905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:25:05.017918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:25:05.017952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:25:05.017965 | orchestrator | 2026-02-13 04:25:05.017979 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-02-13 04:25:05.018010 | orchestrator | Friday 13 February 2026 04:24:55 +0000 (0:00:04.627) 0:00:47.767 ******* 2026-02-13 04:25:05.018077 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:25:05.018090 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-13 04:25:05.018101 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-13 04:25:05.018112 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-13 04:25:05.018123 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-13 04:25:05.018134 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-13 04:25:05.018145 | orchestrator | 2026-02-13 04:25:05.018156 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-02-13 04:25:05.018176 | orchestrator | Friday 13 February 2026 04:24:57 +0000 (0:00:01.430) 0:00:49.198 ******* 2026-02-13 04:25:05.018188 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:25:05.018198 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:25:05.018241 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:25:05.018253 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:25:05.018264 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:25:05.018275 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:25:05.018288 | orchestrator | 2026-02-13 04:25:05.018301 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-02-13 
04:25:05.018315 | orchestrator | Friday 13 February 2026 04:24:57 +0000 (0:00:00.566) 0:00:49.765 ******* 2026-02-13 04:25:05.018329 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:25:05.018341 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:25:05.018355 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:25:05.018367 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:25:05.018380 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:25:05.018393 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:25:05.018406 | orchestrator | 2026-02-13 04:25:05.018419 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-02-13 04:25:05.018431 | orchestrator | Friday 13 February 2026 04:24:59 +0000 (0:00:01.651) 0:00:51.417 ******* 2026-02-13 04:25:05.018444 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:25:05.018456 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:25:05.018469 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:25:05.018482 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:25:05.018494 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:25:05.018507 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:25:05.018520 | orchestrator | 2026-02-13 04:25:05.018531 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-02-13 04:25:05.018542 | orchestrator | Friday 13 February 2026 04:25:00 +0000 (0:00:01.449) 0:00:52.866 ******* 2026-02-13 04:25:05.018552 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:25:05.018574 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-13 04:25:05.018585 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-13 04:25:05.018596 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-13 04:25:05.018606 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-13 04:25:05.018617 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-02-13 04:25:05.018628 | orchestrator | 2026-02-13 04:25:05.018648 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-02-13 04:25:05.018659 | orchestrator | Friday 13 February 2026 04:25:02 +0000 (0:00:01.594) 0:00:54.461 ******* 2026-02-13 04:25:05.018671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:05.018684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:05.018696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:05.018722 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:05.866656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:05.866762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:05.866802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:25:05.866816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:25:05.866828 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:25:05.866840 | orchestrator | 2026-02-13 04:25:05.866853 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-02-13 04:25:05.866866 | orchestrator | Friday 13 February 2026 04:25:05 +0000 (0:00:02.522) 0:00:56.983 ******* 2026-02-13 04:25:05.866891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:25:05.866922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:25:05.866936 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:25:05.866948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:25:05.866968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:25:05.866979 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:25:05.866990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:25:05.867002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:25:05.867013 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:25:05.867024 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:25:05.867036 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:25:05.867059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:25:09.254865 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:25:09.254986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:25:09.255006 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:25:09.255018 | orchestrator | 2026-02-13 04:25:09.255031 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-02-13 04:25:09.255043 | orchestrator | Friday 13 February 2026 04:25:05 +0000 (0:00:00.853) 0:00:57.837 ******* 2026-02-13 04:25:09.255053 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:25:09.255064 | orchestrator | skipping: 
[testbed-node-1] 2026-02-13 04:25:09.255075 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:25:09.255085 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:25:09.255096 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:25:09.255107 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:25:09.255117 | orchestrator | 2026-02-13 04:25:09.255128 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-02-13 04:25:09.255140 | orchestrator | Friday 13 February 2026 04:25:06 +0000 (0:00:00.747) 0:00:58.585 ******* 2026-02-13 04:25:09.255152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:25:09.255165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:25:09.255178 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-13 04:25:09.255207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:25:09.255334 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:25:09.255372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 
'timeout': '30'}}})  2026-02-13 04:25:09.255386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 04:25:09.255399 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:25:09.255412 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:25:09.255425 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:25:09.255440 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:25:09.255453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:25:09.255466 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:25:09.255486 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-13 04:25:09.255507 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:25:09.255520 | orchestrator | 2026-02-13 04:25:09.255533 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-02-13 04:25:09.255546 | orchestrator | Friday 13 February 2026 04:25:07 +0000 (0:00:00.899) 0:00:59.484 ******* 2026-02-13 04:25:09.255577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:41.219675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:41.219785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:41.219801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:41.219812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:41.219856 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-13 04:25:41.219868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:25:41.219891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:25:41.219901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-13 04:25:41.219911 | orchestrator | 
2026-02-13 04:25:41.219922 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-13 04:25:41.219932 | orchestrator | Friday 13 February 2026 04:25:09 +0000 (0:00:01.739) 0:01:01.223 ******* 2026-02-13 04:25:41.219941 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:25:41.219951 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:25:41.219960 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:25:41.219968 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:25:41.219977 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:25:41.219985 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:25:41.219994 | orchestrator | 2026-02-13 04:25:41.220002 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-13 04:25:41.220011 | orchestrator | Friday 13 February 2026 04:25:09 +0000 (0:00:00.603) 0:01:01.827 ******* 2026-02-13 04:25:41.220020 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:25:41.220028 | orchestrator | 2026-02-13 04:25:41.220037 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-13 04:25:41.220046 | orchestrator | Friday 13 February 2026 04:25:14 +0000 (0:00:04.650) 0:01:06.477 ******* 2026-02-13 04:25:41.220054 | orchestrator | 2026-02-13 04:25:41.220063 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-13 04:25:41.220071 | orchestrator | Friday 13 February 2026 04:25:14 +0000 (0:00:00.074) 0:01:06.551 ******* 2026-02-13 04:25:41.220088 | orchestrator | 2026-02-13 04:25:41.220096 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-13 04:25:41.220105 | orchestrator | Friday 13 February 2026 04:25:14 +0000 (0:00:00.077) 0:01:06.629 ******* 2026-02-13 04:25:41.220113 | orchestrator | 2026-02-13 04:25:41.220164 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-02-13 04:25:41.220175 | orchestrator | Friday 13 February 2026 04:25:14 +0000 (0:00:00.245) 0:01:06.875 ******* 2026-02-13 04:25:41.220183 | orchestrator | 2026-02-13 04:25:41.220192 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-13 04:25:41.220258 | orchestrator | Friday 13 February 2026 04:25:14 +0000 (0:00:00.074) 0:01:06.949 ******* 2026-02-13 04:25:41.220275 | orchestrator | 2026-02-13 04:25:41.220291 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-13 04:25:41.220306 | orchestrator | Friday 13 February 2026 04:25:15 +0000 (0:00:00.077) 0:01:07.027 ******* 2026-02-13 04:25:41.220319 | orchestrator | 2026-02-13 04:25:41.220330 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-13 04:25:41.220340 | orchestrator | Friday 13 February 2026 04:25:15 +0000 (0:00:00.074) 0:01:07.102 ******* 2026-02-13 04:25:41.220349 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:25:41.220360 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:25:41.220370 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:25:41.220381 | orchestrator | 2026-02-13 04:25:41.220395 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-13 04:25:41.220427 | orchestrator | Friday 13 February 2026 04:25:25 +0000 (0:00:10.321) 0:01:17.423 ******* 2026-02-13 04:25:41.220442 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:25:41.220457 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:25:41.220469 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:25:41.220483 | orchestrator | 2026-02-13 04:25:41.220498 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-13 04:25:41.220513 | orchestrator | Friday 13 February 2026 04:25:30 +0000 
(0:00:04.845) 0:01:22.268 ******* 2026-02-13 04:25:41.220528 | orchestrator | changed: [testbed-node-5] 2026-02-13 04:25:41.220542 | orchestrator | changed: [testbed-node-4] 2026-02-13 04:25:41.220556 | orchestrator | changed: [testbed-node-3] 2026-02-13 04:25:41.220570 | orchestrator | 2026-02-13 04:25:41.220578 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:25:41.220594 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-13 04:25:41.220610 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-13 04:25:41.220637 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-13 04:25:41.669193 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-13 04:25:41.669321 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-13 04:25:41.669338 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-13 04:25:41.669351 | orchestrator | 2026-02-13 04:25:41.669364 | orchestrator | 2026-02-13 04:25:41.669375 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:25:41.669387 | orchestrator | Friday 13 February 2026 04:25:41 +0000 (0:00:10.909) 0:01:33.178 ******* 2026-02-13 04:25:41.669398 | orchestrator | =============================================================================== 2026-02-13 04:25:41.669435 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 10.91s 2026-02-13 04:25:41.669447 | orchestrator | ceilometer : Restart ceilometer-notification container ----------------- 10.32s 2026-02-13 04:25:41.669458 | orchestrator | ceilometer : Restart 
ceilometer-central container ----------------------- 4.85s 2026-02-13 04:25:41.669469 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.65s 2026-02-13 04:25:41.669479 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.63s 2026-02-13 04:25:41.669490 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 3.86s 2026-02-13 04:25:41.669501 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.33s 2026-02-13 04:25:41.669512 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.31s 2026-02-13 04:25:41.669522 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.13s 2026-02-13 04:25:41.669533 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.65s 2026-02-13 04:25:41.669544 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.52s 2026-02-13 04:25:41.669554 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.35s 2026-02-13 04:25:41.669565 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.74s 2026-02-13 04:25:41.669576 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.65s 2026-02-13 04:25:41.669587 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.59s 2026-02-13 04:25:41.669598 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.57s 2026-02-13 04:25:41.669609 | orchestrator | service-cert-copy : ceilometer | Copying over backend internal TLS key --- 1.51s 2026-02-13 04:25:41.669620 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.45s 2026-02-13 04:25:41.669630 | orchestrator | ceilometer : Check if custom 
polling.yaml exists ------------------------ 1.44s 2026-02-13 04:25:41.669641 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.44s 2026-02-13 04:25:44.017239 | orchestrator | 2026-02-13 04:25:44 | INFO  | Task 121ec514-79ac-4a5b-a187-707460bc3445 (aodh) was prepared for execution. 2026-02-13 04:25:44.017372 | orchestrator | 2026-02-13 04:25:44 | INFO  | It takes a moment until task 121ec514-79ac-4a5b-a187-707460bc3445 (aodh) has been started and output is visible here. 2026-02-13 04:26:15.266095 | orchestrator | 2026-02-13 04:26:15.266252 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:26:15.266276 | orchestrator | 2026-02-13 04:26:15.266288 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:26:15.266299 | orchestrator | Friday 13 February 2026 04:25:48 +0000 (0:00:00.257) 0:00:00.257 ******* 2026-02-13 04:26:15.266311 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:26:15.266323 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:26:15.266334 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:26:15.266346 | orchestrator | 2026-02-13 04:26:15.266357 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:26:15.266385 | orchestrator | Friday 13 February 2026 04:25:48 +0000 (0:00:00.321) 0:00:00.579 ******* 2026-02-13 04:26:15.266397 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-13 04:26:15.266407 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-13 04:26:15.266414 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-13 04:26:15.266421 | orchestrator | 2026-02-13 04:26:15.266428 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-13 04:26:15.266435 | orchestrator | 2026-02-13 04:26:15.266442 | orchestrator | TASK [aodh : 
include_tasks] **************************************************** 2026-02-13 04:26:15.266449 | orchestrator | Friday 13 February 2026 04:25:48 +0000 (0:00:00.456) 0:00:01.035 ******* 2026-02-13 04:26:15.266456 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:26:15.266482 | orchestrator | 2026-02-13 04:26:15.266489 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-13 04:26:15.266495 | orchestrator | Friday 13 February 2026 04:25:49 +0000 (0:00:00.560) 0:00:01.596 ******* 2026-02-13 04:26:15.266502 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-13 04:26:15.266509 | orchestrator | 2026-02-13 04:26:15.266516 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-13 04:26:15.266522 | orchestrator | Friday 13 February 2026 04:25:52 +0000 (0:00:03.408) 0:00:05.004 ******* 2026-02-13 04:26:15.266529 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-13 04:26:15.266536 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-13 04:26:15.266543 | orchestrator | 2026-02-13 04:26:15.266549 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-13 04:26:15.266556 | orchestrator | Friday 13 February 2026 04:25:59 +0000 (0:00:06.368) 0:00:11.372 ******* 2026-02-13 04:26:15.266563 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-13 04:26:15.266570 | orchestrator | 2026-02-13 04:26:15.266577 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-13 04:26:15.266584 | orchestrator | Friday 13 February 2026 04:26:02 +0000 (0:00:03.198) 0:00:14.570 ******* 2026-02-13 04:26:15.266590 | orchestrator | [WARNING]: Module did not set 
no_log for update_password 2026-02-13 04:26:15.266597 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-13 04:26:15.266603 | orchestrator | 2026-02-13 04:26:15.266610 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-02-13 04:26:15.266616 | orchestrator | Friday 13 February 2026 04:26:06 +0000 (0:00:03.698) 0:00:18.269 ******* 2026-02-13 04:26:15.266623 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-13 04:26:15.266630 | orchestrator | 2026-02-13 04:26:15.266636 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-13 04:26:15.266643 | orchestrator | Friday 13 February 2026 04:26:09 +0000 (0:00:03.227) 0:00:21.497 ******* 2026-02-13 04:26:15.266649 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-13 04:26:15.266656 | orchestrator | 2026-02-13 04:26:15.266662 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-13 04:26:15.266669 | orchestrator | Friday 13 February 2026 04:26:13 +0000 (0:00:03.701) 0:00:25.198 ******* 2026-02-13 04:26:15.266679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:15.266705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:15.266723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:15.266731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:15.266739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:15.266746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:15.266753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:15.266765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:16.573966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:16.574230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:16.574264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:16.574285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:16.574305 | orchestrator | 2026-02-13 04:26:16.574328 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-13 04:26:16.574348 | orchestrator | Friday 13 February 2026 04:26:15 +0000 (0:00:02.186) 0:00:27.384 ******* 2026-02-13 04:26:16.574368 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:26:16.574389 | orchestrator | 2026-02-13 
04:26:16.574407 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-13 04:26:16.574425 | orchestrator | Friday 13 February 2026 04:26:15 +0000 (0:00:00.139) 0:00:27.524 ******* 2026-02-13 04:26:16.574437 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:26:16.574448 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:26:16.574458 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:26:16.574470 | orchestrator | 2026-02-13 04:26:16.574483 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-13 04:26:16.574495 | orchestrator | Friday 13 February 2026 04:26:15 +0000 (0:00:00.527) 0:00:28.051 ******* 2026-02-13 04:26:16.574509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 04:26:16.574571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 04:26:16.574595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:26:16.574610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 04:26:16.574623 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:26:16.574636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 04:26:16.574648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 04:26:16.574659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:26:16.574688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 04:26:21.582910 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:26:21.583038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 04:26:21.583058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-13 04:26:21.583072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:26:21.583083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 04:26:21.583095 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:26:21.583107 | orchestrator | 2026-02-13 04:26:21.583119 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-13 04:26:21.583154 | orchestrator | Friday 13 February 2026 04:26:16 +0000 (0:00:00.652) 0:00:28.703 ******* 2026-02-13 04:26:21.583166 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:26:21.583178 | orchestrator | 2026-02-13 04:26:21.583236 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-13 04:26:21.583249 | orchestrator | Friday 
13 February 2026 04:26:17 +0000 (0:00:00.719) 0:00:29.423 ******* 2026-02-13 04:26:21.583261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:21.583298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:21.583311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:21.583323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:21.583335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-02-13 04:26:21.583356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:21.583368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:21.583391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:22.207249 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:22.207350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:22.207365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:22.207399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:22.207412 | orchestrator | 2026-02-13 04:26:22.207426 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-13 04:26:22.207438 | orchestrator | Friday 13 February 2026 04:26:21 +0000 (0:00:04.284) 0:00:33.707 ******* 2026-02-13 04:26:22.207451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 04:26:22.207478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 04:26:22.207510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:26:22.207522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 04:26:22.207534 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:26:22.207547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 04:26:22.207567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 04:26:22.207586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:26:22.207606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 04:26:22.207626 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:26:22.207693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 04:26:23.205577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-13 04:26:23.205670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:26:23.205705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 04:26:23.205716 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:26:23.205727 | orchestrator | 2026-02-13 04:26:23.205738 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-13 04:26:23.205748 | orchestrator | Friday 13 February 2026 04:26:22 +0000 (0:00:00.626) 0:00:34.334 ******* 2026-02-13 04:26:23.205758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 04:26:23.205781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 04:26:23.205791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:26:23.205816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 04:26:23.205832 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:26:23.205841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 04:26:23.205851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 04:26:23.205860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:26:23.205869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 04:26:23.205882 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:26:23.205898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-13 04:26:27.453405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-13 04:26:27.453529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-13 04:26:27.453544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-13 04:26:27.453552 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:26:27.453560 | orchestrator | 2026-02-13 04:26:27.453568 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-02-13 04:26:27.453576 | orchestrator | Friday 13 February 2026 04:26:23 +0000 (0:00:00.999) 0:00:35.333 ******* 2026-02-13 04:26:27.453583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:27.453604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:27.453629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:27.453642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:27.453649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:27.453656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:27.453663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:27.453673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:27.453680 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:27.453696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:35.782778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:35.782885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:35.782901 | orchestrator | 2026-02-13 04:26:35.782915 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-13 04:26:35.782927 | orchestrator | Friday 13 February 2026 04:26:27 +0000 (0:00:04.244) 0:00:39.578 ******* 2026-02-13 04:26:35.782939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:35.782965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:35.782995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:35.783023 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:35.783034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:35.783044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:35.783054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:35.783069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:35.783080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:35.783097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:35.783119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:41.040130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:41.040383 | orchestrator | 2026-02-13 04:26:41.040417 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-13 04:26:41.040440 | orchestrator | Friday 13 February 2026 04:26:35 +0000 (0:00:08.323) 0:00:47.902 ******* 2026-02-13 04:26:41.040459 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:26:41.040479 | orchestrator | 
changed: [testbed-node-1] 2026-02-13 04:26:41.040494 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:26:41.040505 | orchestrator | 2026-02-13 04:26:41.040518 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-13 04:26:41.040529 | orchestrator | Friday 13 February 2026 04:26:37 +0000 (0:00:01.812) 0:00:49.714 ******* 2026-02-13 04:26:41.040541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:41.040573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:41.040612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-13 04:26:41.040647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:41.040663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:41.040676 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-13 04:26:41.040690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:41.040717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:41.040730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:41.040745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:26:41.040766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:27:23.954529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-13 04:27:23.954644 | orchestrator | 2026-02-13 04:27:23.954662 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-13 04:27:23.954675 | orchestrator | Friday 13 February 2026 04:26:41 +0000 (0:00:03.449) 0:00:53.163 ******* 2026-02-13 04:27:23.954687 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:27:23.954699 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:27:23.954710 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:27:23.954721 | orchestrator | 2026-02-13 04:27:23.954732 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-02-13 04:27:23.954744 | orchestrator | Friday 13 February 2026 04:26:41 +0000 (0:00:00.298) 0:00:53.461 ******* 2026-02-13 04:27:23.954755 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:27:23.954766 | orchestrator | 2026-02-13 04:27:23.954777 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-02-13 04:27:23.954788 | orchestrator | Friday 13 February 2026 04:26:43 +0000 (0:00:02.127) 0:00:55.589 ******* 2026-02-13 04:27:23.954824 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:27:23.954836 | orchestrator | 2026-02-13 
04:27:23.954847 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-02-13 04:27:23.954858 | orchestrator | Friday 13 February 2026 04:26:45 +0000 (0:00:02.193) 0:00:57.782 ******* 2026-02-13 04:27:23.954868 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:27:23.954879 | orchestrator | 2026-02-13 04:27:23.954890 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-13 04:27:23.954901 | orchestrator | Friday 13 February 2026 04:26:58 +0000 (0:00:12.866) 0:01:10.649 ******* 2026-02-13 04:27:23.954912 | orchestrator | 2026-02-13 04:27:23.954923 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-13 04:27:23.954933 | orchestrator | Friday 13 February 2026 04:26:58 +0000 (0:00:00.071) 0:01:10.721 ******* 2026-02-13 04:27:23.954944 | orchestrator | 2026-02-13 04:27:23.954955 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-13 04:27:23.954966 | orchestrator | Friday 13 February 2026 04:26:58 +0000 (0:00:00.072) 0:01:10.793 ******* 2026-02-13 04:27:23.954977 | orchestrator | 2026-02-13 04:27:23.954987 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-02-13 04:27:23.955014 | orchestrator | Friday 13 February 2026 04:26:58 +0000 (0:00:00.249) 0:01:11.042 ******* 2026-02-13 04:27:23.955026 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:27:23.955037 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:27:23.955048 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:27:23.955061 | orchestrator | 2026-02-13 04:27:23.955074 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-02-13 04:27:23.955087 | orchestrator | Friday 13 February 2026 04:27:04 +0000 (0:00:05.715) 0:01:16.757 ******* 2026-02-13 04:27:23.955099 | orchestrator | changed: 
[testbed-node-0] 2026-02-13 04:27:23.955111 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:27:23.955124 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:27:23.955136 | orchestrator | 2026-02-13 04:27:23.955148 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-02-13 04:27:23.955161 | orchestrator | Friday 13 February 2026 04:27:09 +0000 (0:00:05.156) 0:01:21.914 ******* 2026-02-13 04:27:23.955205 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:27:23.955219 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:27:23.955237 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:27:23.955255 | orchestrator | 2026-02-13 04:27:23.955274 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-02-13 04:27:23.955293 | orchestrator | Friday 13 February 2026 04:27:17 +0000 (0:00:08.115) 0:01:30.030 ******* 2026-02-13 04:27:23.955311 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:27:23.955326 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:27:23.955337 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:27:23.955349 | orchestrator | 2026-02-13 04:27:23.955362 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:27:23.955377 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 04:27:23.955390 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-13 04:27:23.955402 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-13 04:27:23.955415 | orchestrator | 2026-02-13 04:27:23.955427 | orchestrator | 2026-02-13 04:27:23.955441 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:27:23.955454 | orchestrator | Friday 13 February 
2026 04:27:23 +0000 (0:00:05.720) 0:01:35.750 ******* 2026-02-13 04:27:23.955465 | orchestrator | =============================================================================== 2026-02-13 04:27:23.955486 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 12.87s 2026-02-13 04:27:23.955497 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.32s 2026-02-13 04:27:23.955526 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 8.12s 2026-02-13 04:27:23.955538 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.37s 2026-02-13 04:27:23.955549 | orchestrator | aodh : Restart aodh-notifier container ---------------------------------- 5.72s 2026-02-13 04:27:23.955560 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 5.72s 2026-02-13 04:27:23.955570 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 5.16s 2026-02-13 04:27:23.955581 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.28s 2026-02-13 04:27:23.955591 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.24s 2026-02-13 04:27:23.955602 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.70s 2026-02-13 04:27:23.955612 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 3.70s 2026-02-13 04:27:23.955623 | orchestrator | aodh : Check aodh containers -------------------------------------------- 3.45s 2026-02-13 04:27:23.955634 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.41s 2026-02-13 04:27:23.955644 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.23s 2026-02-13 04:27:23.955655 | orchestrator | service-ks-register : aodh | Creating 
projects -------------------------- 3.20s 2026-02-13 04:27:23.955665 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.19s 2026-02-13 04:27:23.955676 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.19s 2026-02-13 04:27:23.955687 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.13s 2026-02-13 04:27:23.955697 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.81s 2026-02-13 04:27:23.955708 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.00s 2026-02-13 04:27:26.303310 | orchestrator | 2026-02-13 04:27:26 | INFO  | Task 51b387d8-8b7b-4fec-8b16-2ce3d6325b30 (kolla-ceph-rgw) was prepared for execution. 2026-02-13 04:27:26.303410 | orchestrator | 2026-02-13 04:27:26 | INFO  | It takes a moment until task 51b387d8-8b7b-4fec-8b16-2ce3d6325b30 (kolla-ceph-rgw) has been started and output is visible here. 
2026-02-13 04:28:01.453456 | orchestrator | 2026-02-13 04:28:01.453583 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:28:01.453600 | orchestrator | 2026-02-13 04:28:01.453613 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:28:01.453624 | orchestrator | Friday 13 February 2026 04:27:30 +0000 (0:00:00.284) 0:00:00.284 ******* 2026-02-13 04:28:01.453636 | orchestrator | ok: [testbed-manager] 2026-02-13 04:28:01.453648 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:28:01.453659 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:28:01.453686 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:28:01.453697 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:28:01.453708 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:28:01.453719 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:28:01.453729 | orchestrator | 2026-02-13 04:28:01.453740 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:28:01.453752 | orchestrator | Friday 13 February 2026 04:27:31 +0000 (0:00:00.837) 0:00:01.121 ******* 2026-02-13 04:28:01.453763 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-13 04:28:01.453774 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-13 04:28:01.453785 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-13 04:28:01.453795 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-13 04:28:01.453806 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-13 04:28:01.453838 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-13 04:28:01.453850 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-13 04:28:01.453861 | orchestrator | 2026-02-13 04:28:01.453872 | orchestrator | PLAY [Apply role ceph-rgw] 
***************************************************** 2026-02-13 04:28:01.453882 | orchestrator | 2026-02-13 04:28:01.453893 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-13 04:28:01.453904 | orchestrator | Friday 13 February 2026 04:27:32 +0000 (0:00:00.749) 0:00:01.871 ******* 2026-02-13 04:28:01.453916 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 04:28:01.453928 | orchestrator | 2026-02-13 04:28:01.453939 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-13 04:28:01.453951 | orchestrator | Friday 13 February 2026 04:27:33 +0000 (0:00:01.523) 0:00:03.395 ******* 2026-02-13 04:28:01.453977 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-13 04:28:01.453988 | orchestrator | 2026-02-13 04:28:01.454001 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-02-13 04:28:01.454066 | orchestrator | Friday 13 February 2026 04:27:37 +0000 (0:00:03.588) 0:00:06.983 ******* 2026-02-13 04:28:01.454081 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-13 04:28:01.454096 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-13 04:28:01.454108 | orchestrator | 2026-02-13 04:28:01.454121 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-13 04:28:01.454134 | orchestrator | Friday 13 February 2026 04:27:43 +0000 (0:00:06.098) 0:00:13.081 ******* 2026-02-13 04:28:01.454146 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-13 04:28:01.454185 | orchestrator | 2026-02-13 04:28:01.454199 
| orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-13 04:28:01.454211 | orchestrator | Friday 13 February 2026 04:27:46 +0000 (0:00:03.116) 0:00:16.197 ******* 2026-02-13 04:28:01.454224 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-13 04:28:01.454237 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-13 04:28:01.454249 | orchestrator | 2026-02-13 04:28:01.454262 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-13 04:28:01.454274 | orchestrator | Friday 13 February 2026 04:27:50 +0000 (0:00:03.756) 0:00:19.954 ******* 2026-02-13 04:28:01.454286 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-13 04:28:01.454299 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-13 04:28:01.454312 | orchestrator | 2026-02-13 04:28:01.454324 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-02-13 04:28:01.454337 | orchestrator | Friday 13 February 2026 04:27:56 +0000 (0:00:06.028) 0:00:25.982 ******* 2026-02-13 04:28:01.454350 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-13 04:28:01.454361 | orchestrator | 2026-02-13 04:28:01.454372 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:28:01.454382 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:28:01.454394 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:28:01.454405 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:28:01.454416 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:28:01.454427 | 
orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:28:01.454464 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:28:01.454477 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:28:01.454497 | orchestrator | 2026-02-13 04:28:01.454515 | orchestrator | 2026-02-13 04:28:01.454534 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:28:01.454565 | orchestrator | Friday 13 February 2026 04:28:00 +0000 (0:00:04.762) 0:00:30.744 ******* 2026-02-13 04:28:01.454594 | orchestrator | =============================================================================== 2026-02-13 04:28:01.454614 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.10s 2026-02-13 04:28:01.454646 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.03s 2026-02-13 04:28:01.454663 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.76s 2026-02-13 04:28:01.454681 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.76s 2026-02-13 04:28:01.454698 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.59s 2026-02-13 04:28:01.454715 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.12s 2026-02-13 04:28:01.454734 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.52s 2026-02-13 04:28:01.454753 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.84s 2026-02-13 04:28:01.454771 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.75s 2026-02-13 04:28:03.812322 | orchestrator | 2026-02-13 04:28:03 | 
INFO  | Task f1e2d176-32ce-4201-8601-276986f0e916 (gnocchi) was prepared for execution. 2026-02-13 04:28:03.812408 | orchestrator | 2026-02-13 04:28:03 | INFO  | It takes a moment until task f1e2d176-32ce-4201-8601-276986f0e916 (gnocchi) has been started and output is visible here. 2026-02-13 04:28:08.949753 | orchestrator | 2026-02-13 04:28:08.949864 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:28:08.949882 | orchestrator | 2026-02-13 04:28:08.949894 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:28:08.949906 | orchestrator | Friday 13 February 2026 04:28:07 +0000 (0:00:00.260) 0:00:00.260 ******* 2026-02-13 04:28:08.949917 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:28:08.949929 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:28:08.949940 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:28:08.949951 | orchestrator | 2026-02-13 04:28:08.949962 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:28:08.949973 | orchestrator | Friday 13 February 2026 04:28:08 +0000 (0:00:00.352) 0:00:00.612 ******* 2026-02-13 04:28:08.949984 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-02-13 04:28:08.949995 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-02-13 04:28:08.950007 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-02-13 04:28:08.950071 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-02-13 04:28:08.950083 | orchestrator | 2026-02-13 04:28:08.950095 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-02-13 04:28:08.950106 | orchestrator | skipping: no hosts matched 2026-02-13 04:28:08.950118 | orchestrator | 2026-02-13 04:28:08.950129 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-13 04:28:08.950141 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:28:08.950201 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:28:08.950241 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:28:08.950252 | orchestrator | 2026-02-13 04:28:08.950263 | orchestrator | 2026-02-13 04:28:08.950274 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:28:08.950285 | orchestrator | Friday 13 February 2026 04:28:08 +0000 (0:00:00.355) 0:00:00.968 ******* 2026-02-13 04:28:08.950297 | orchestrator | =============================================================================== 2026-02-13 04:28:08.950311 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2026-02-13 04:28:08.950324 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-02-13 04:28:11.308726 | orchestrator | 2026-02-13 04:28:11 | INFO  | Task bb401106-e6b8-4379-977d-37668c02952e (manila) was prepared for execution. 2026-02-13 04:28:11.308823 | orchestrator | 2026-02-13 04:28:11 | INFO  | It takes a moment until task bb401106-e6b8-4379-977d-37668c02952e (manila) has been started and output is visible here. 
2026-02-13 04:28:52.106242 | orchestrator | 2026-02-13 04:28:52.106363 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:28:52.106381 | orchestrator | 2026-02-13 04:28:52.106394 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:28:52.106406 | orchestrator | Friday 13 February 2026 04:28:15 +0000 (0:00:00.266) 0:00:00.266 ******* 2026-02-13 04:28:52.106417 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:28:52.106429 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:28:52.106440 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:28:52.106451 | orchestrator | 2026-02-13 04:28:52.106462 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:28:52.106473 | orchestrator | Friday 13 February 2026 04:28:15 +0000 (0:00:00.366) 0:00:00.632 ******* 2026-02-13 04:28:52.106484 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-02-13 04:28:52.106496 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-02-13 04:28:52.106507 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-02-13 04:28:52.106518 | orchestrator | 2026-02-13 04:28:52.106529 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-02-13 04:28:52.106540 | orchestrator | 2026-02-13 04:28:52.106551 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-13 04:28:52.106578 | orchestrator | Friday 13 February 2026 04:28:16 +0000 (0:00:00.447) 0:00:01.080 ******* 2026-02-13 04:28:52.106590 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:28:52.106602 | orchestrator | 2026-02-13 04:28:52.106613 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-13 
04:28:52.106624 | orchestrator | Friday 13 February 2026 04:28:16 +0000 (0:00:00.569) 0:00:01.650 ******* 2026-02-13 04:28:52.106635 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:28:52.106646 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:28:52.106658 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:28:52.106668 | orchestrator | 2026-02-13 04:28:52.106679 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-02-13 04:28:52.106690 | orchestrator | Friday 13 February 2026 04:28:17 +0000 (0:00:00.465) 0:00:02.115 ******* 2026-02-13 04:28:52.106701 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-02-13 04:28:52.106712 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-02-13 04:28:52.106723 | orchestrator | 2026-02-13 04:28:52.106734 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-02-13 04:28:52.106745 | orchestrator | Friday 13 February 2026 04:28:23 +0000 (0:00:06.472) 0:00:08.588 ******* 2026-02-13 04:28:52.106756 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-02-13 04:28:52.106793 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-02-13 04:28:52.106804 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-02-13 04:28:52.106816 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-02-13 04:28:52.106826 | orchestrator | 2026-02-13 04:28:52.106837 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************ 2026-02-13 04:28:52.106848 | orchestrator | Friday 13 February 2026 04:28:36 +0000 (0:00:12.231) 0:00:20.819 ******* 2026-02-13 04:28:52.106859 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-13 04:28:52.106870 | orchestrator | 2026-02-13 04:28:52.106880 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-02-13 04:28:52.106892 | orchestrator | Friday 13 February 2026 04:28:39 +0000 (0:00:03.070) 0:00:23.889 ******* 2026-02-13 04:28:52.106902 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-13 04:28:52.106913 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-02-13 04:28:52.106924 | orchestrator | 2026-02-13 04:28:52.106934 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-02-13 04:28:52.106945 | orchestrator | Friday 13 February 2026 04:28:43 +0000 (0:00:03.838) 0:00:27.728 ******* 2026-02-13 04:28:52.106956 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-13 04:28:52.106966 | orchestrator | 2026-02-13 04:28:52.106977 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-02-13 04:28:52.106988 | orchestrator | Friday 13 February 2026 04:28:46 +0000 (0:00:03.095) 0:00:30.824 ******* 2026-02-13 04:28:52.106999 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-02-13 04:28:52.107010 | orchestrator | 2026-02-13 04:28:52.107021 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-02-13 04:28:52.107032 | orchestrator | Friday 13 February 2026 04:28:49 +0000 (0:00:03.709) 0:00:34.533 ******* 2026-02-13 04:28:52.107063 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-13 04:28:52.107083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-13 04:28:52.107095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-13 04:28:52.107115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:28:52.107127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:28:52.107165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:28:52.107186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:02.395936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:02.396108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:02.397038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:02.397061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:02.397074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:02.397086 | orchestrator | 2026-02-13 04:29:02.397099 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-13 04:29:02.397112 | orchestrator | Friday 13 February 2026 04:28:52 +0000 (0:00:02.300) 0:00:36.834 ******* 2026-02-13 04:29:02.397124 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:29:02.397135 | orchestrator | 2026-02-13 04:29:02.397164 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-02-13 04:29:02.397175 | orchestrator | Friday 13 February 2026 04:28:52 +0000 (0:00:00.565) 0:00:37.399 ******* 2026-02-13 04:29:02.397186 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:29:02.397197 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:29:02.397208 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:29:02.397219 | orchestrator | 2026-02-13 04:29:02.397230 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-02-13 04:29:02.397241 | orchestrator | Friday 13 February 2026 04:28:53 +0000 (0:00:00.950) 0:00:38.349 ******* 2026-02-13 04:29:02.397253 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-13 04:29:02.397284 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-13 04:29:02.397296 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-13 04:29:02.397317 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-13 04:29:02.397336 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-13 04:29:02.397347 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-13 04:29:02.397358 | orchestrator |
2026-02-13 04:29:02.397369 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] *********************************
2026-02-13 04:29:02.397380 | orchestrator | Friday 13 February 2026 04:28:55 +0000 (0:00:01.734) 0:00:40.083 *******
2026-02-13 04:29:02.397391 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-13 04:29:02.397402 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-13 04:29:02.397412 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-13 04:29:02.397423 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-13 04:29:02.397434 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']})
2026-02-13 04:29:02.397445 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})
2026-02-13 04:29:02.397455 | orchestrator |
2026-02-13 04:29:02.397466 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] *****
2026-02-13 04:29:02.397477 | orchestrator | Friday 13 February 2026 04:28:56 +0000 (0:00:01.266) 0:00:41.349 *******
2026-02-13 04:29:02.397489 | orchestrator | ok: [testbed-node-0] => (item=manila-share)
2026-02-13 04:29:02.397500 | orchestrator | ok: [testbed-node-1] => (item=manila-share)
2026-02-13 04:29:02.397511 | orchestrator | ok: [testbed-node-2] => (item=manila-share)
2026-02-13 04:29:02.397522 | orchestrator |
2026-02-13 04:29:02.397533 | orchestrator | TASK [manila : Check if policies shall be overwritten] *************************
2026-02-13 04:29:02.397544 | orchestrator | Friday 13 February 2026 04:28:57 +0000 (0:00:00.707) 0:00:42.057 *******
2026-02-13 04:29:02.397554 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:29:02.397565 | orchestrator |
2026-02-13 04:29:02.397576 | orchestrator | TASK [manila : Set manila policy file] *****************************************
2026-02-13 04:29:02.397599 | orchestrator | Friday 13 February 2026 04:28:57 +0000 (0:00:00.133) 0:00:42.191 *******
2026-02-13 04:29:02.397621 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:29:02.397632 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:29:02.397643 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:29:02.397653 | orchestrator |
2026-02-13 04:29:02.397665 | orchestrator | TASK [manila : include_tasks] **************************************************
2026-02-13 04:29:02.397675 | orchestrator | Friday 13 February 2026 04:28:57 +0000 (0:00:00.447) 0:00:42.638 *******
2026-02-13 04:29:02.397686 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 04:29:02.397697 | orchestrator |
2026-02-13 04:29:02.397708 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] *********
2026-02-13 04:29:02.397726 | orchestrator | Friday 13 February 2026 04:28:58 +0000 (0:00:00.578) 0:00:43.216 *******
2026-02-13 04:29:02.397746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:03.233666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:03.233751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:03.233763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.233772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.233799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.233820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.233834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.233842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.233850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.233858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.233865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.233879 | orchestrator |
2026-02-13 04:29:03.233888 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] ***
2026-02-13 04:29:03.233897 | orchestrator | Friday 13 February 2026 04:29:02 +0000 (0:00:03.918) 0:00:47.135 *******
2026-02-13 04:29:03.233910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:03.901075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.901270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.901291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.901305 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:29:03.901320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:03.901356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.901369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.901408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.901422 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:29:03.901434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:03.901445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.901464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.901476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:03.901487 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:29:03.901499 | orchestrator |
2026-02-13 04:29:03.901511 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ******
2026-02-13 04:29:03.901524 | orchestrator | Friday 13 February 2026 04:29:03 +0000 (0:00:00.858) 0:00:47.994 *******
2026-02-13 04:29:03.901549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:08.698900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:08.699013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:08.699030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:08.699087 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:29:08.699104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:08.699166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:08.699206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:08.699252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:08.699267 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:29:08.699278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:08.699300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:08.699311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:08.699323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:08.699334 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:29:08.699345 | orchestrator |
2026-02-13 04:29:08.699359 | orchestrator | TASK [manila : Copying over config.json files for services] ********************
2026-02-13 04:29:08.699371 | orchestrator | Friday 13 February 2026 04:29:04 +0000 (0:00:00.879) 0:00:48.873 *******
2026-02-13 04:29:08.699396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:15.430930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:15.431036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:15.431047 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:15.431056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:15.431062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:15.431093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:15.431102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:15.431114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:15.431120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:15.431127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:15.431176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:15.431188 | orchestrator |
2026-02-13 04:29:15.431201 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-02-13 04:29:15.431212 | orchestrator | Friday 13 February 2026 04:29:08 +0000 (0:00:04.759) 0:00:53.632 *******
2026-02-13 04:29:15.431235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:19.471745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api',
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-13 04:29:19.471831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-13 04:29:19.471843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:19.471853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-13 04:29:19.471876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:19.471898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-13 04:29:19.471926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:19.471935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-13 04:29:19.471942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:19.471950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:19.471958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-13 04:29:19.471966 | orchestrator | 2026-02-13 04:29:19.471975 | orchestrator | TASK [manila : Copying over manila-share.conf] ********************************* 2026-02-13 04:29:19.471988 | orchestrator | Friday 13 February 2026 04:29:15 +0000 (0:00:06.552) 0:01:00.185 ******* 
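The loop results above print each kolla service definition as a Python dict literal, which makes them easy to parse back into a structure when reading or grepping such logs. A minimal sketch, assuming the `timestamp | orchestrator |` prefix has already been stripped; the `parse_loop_item` helper and the truncated `record` string are illustrative, not part of the job:

```python
import ast
import re

# One loop-result record from the console above, truncated to the fields we
# need. The item is printed as a Python dict literal, so ast.literal_eval
# can parse it back safely.
record = ("changed: [testbed-node-0] => (item={'key': 'manila-scheduler', "
          "'value': {'container_name': 'manila_scheduler', "
          "'healthcheck': {'interval': '30', 'retries': '3', "
          "'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672']}}})")

def parse_loop_item(line: str) -> dict:
    """Extract the (item=...) dict from an Ansible loop-result line."""
    match = re.search(r"\(item=(\{.*\})\)", line)
    if match is None:
        raise ValueError("no loop item found in line")
    return ast.literal_eval(match.group(1))

item = parse_loop_item(record)
print(item["key"])                              # manila-scheduler
print(item["value"]["healthcheck"]["test"][1])  # healthcheck_port manila-scheduler 5672
```

The same approach recovers, for example, each container's healthcheck command or image tag across all nodes without hand-counting through the wrapped output.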
2026-02-13 04:29:19.471996 | orchestrator | changed: [testbed-node-0] => (item=manila-share)
2026-02-13 04:29:19.472071 | orchestrator | changed: [testbed-node-2] => (item=manila-share)
2026-02-13 04:29:19.472082 | orchestrator | changed: [testbed-node-1] => (item=manila-share)
2026-02-13 04:29:19.472096 | orchestrator |
2026-02-13 04:29:19.472104 | orchestrator | TASK [manila : Copying over existing policy file] ******************************
2026-02-13 04:29:19.472111 | orchestrator | Friday 13 February 2026 04:29:18 +0000 (0:00:03.374) 0:01:03.559 *******
2026-02-13 04:29:19.472127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:22.658437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:22.658536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:22.658554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:22.658567 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:29:22.658581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:22.658628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:22.658637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:22.658658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:22.658665 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:29:22.658671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:29:22.658678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:29:22.658684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:29:22.658699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:29:22.658706 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:29:22.658713 | orchestrator |
2026-02-13 04:29:22.658721 | orchestrator | TASK [manila : Check manila containers] ****************************************
2026-02-13 04:29:22.658728 | orchestrator | Friday 13 February 2026 04:29:19 +0000 (0:00:00.648) 0:01:04.208 *******
2026-02-13 04:29:22.658740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:30:03.501327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:30:03.501451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-13 04:30:03.501470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:30:03.501525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:30:03.501539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-13 04:30:03.501569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:30:03.501584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:30:03.501595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-13 04:30:03.501607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:30:03.501632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:30:03.501644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-13 04:30:03.501656 | orchestrator |
2026-02-13 04:30:03.501670 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-02-13 04:30:03.501683 | orchestrator | Friday 13 February 2026 04:29:22 +0000 (0:00:03.193) 0:01:07.402 *******
2026-02-13 04:30:03.501694 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:30:03.501706 | orchestrator |
2026-02-13 04:30:03.501717 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-02-13 04:30:03.501728 | orchestrator | Friday 13 February 2026 04:29:24 +0000 (0:00:02.078) 0:01:09.481 *******
2026-02-13 04:30:03.501739 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:30:03.501749 | orchestrator |
2026-02-13 04:30:03.501760 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-02-13 04:30:03.501771 | orchestrator | Friday 13 February 2026 04:29:26 +0000 (0:00:02.177) 0:01:11.658 *******
2026-02-13 04:30:03.501782 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:30:03.501792 | orchestrator |
2026-02-13 04:30:03.501803 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-13 04:30:03.501814 | orchestrator | Friday 13 February 2026 04:30:03 +0000 (0:00:36.251) 0:01:47.910 *******
2026-02-13 04:30:03.501825 | orchestrator |
2026-02-13 04:30:03.501846 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-13 04:30:52.582999 | orchestrator | Friday 13 February 2026 04:30:03 +0000 (0:00:00.073) 0:01:47.984 *******
2026-02-13 04:30:52.583166 | orchestrator |
2026-02-13 04:30:52.583182 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-13 04:30:52.583190 | orchestrator | Friday 13 February 2026 04:30:03 +0000 (0:00:00.074) 0:01:48.058 *******
2026-02-13 04:30:52.583197 | orchestrator |
2026-02-13 04:30:52.583203 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************
2026-02-13 04:30:52.583211 | orchestrator | Friday 13 February 2026 04:30:03 +0000 (0:00:00.075) 0:01:48.134 *******
2026-02-13 04:30:52.583218 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:30:52.583227 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:30:52.583234 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:30:52.583241 | orchestrator |
2026-02-13 04:30:52.583248 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] ***********************
2026-02-13 04:30:52.583255 | orchestrator | Friday 13 February 2026 04:30:18 +0000 (0:00:15.041) 0:02:03.175 *******
2026-02-13 04:30:52.583262 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:30:52.583268 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:30:52.583296 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:30:52.583303 | orchestrator |
2026-02-13 04:30:52.583310 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ******************
2026-02-13 04:30:52.583317 | orchestrator | Friday 13 February 2026 04:30:24 +0000 (0:00:05.935) 0:02:09.110 *******
2026-02-13 04:30:52.583324 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:30:52.583330 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:30:52.583336 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:30:52.583342 | orchestrator |
2026-02-13 04:30:52.583349 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] **********************
2026-02-13 04:30:52.583356 | orchestrator | Friday 13 February 2026 04:30:34 +0000 (0:00:10.421) 0:02:19.531 *******
2026-02-13 04:30:52.583363 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:30:52.583370 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:30:52.583376 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:30:52.583383 | orchestrator |
2026-02-13 04:30:52.583389 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 04:30:52.583398 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-13 04:30:52.583406 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-13 04:30:52.583413 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-13 04:30:52.583419 | orchestrator |
2026-02-13 04:30:52.583425 | orchestrator |
2026-02-13 04:30:52.583432 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 04:30:52.583438 | orchestrator | Friday 13 February 2026 04:30:52 +0000 (0:00:17.230) 0:02:36.762 *******
2026-02-13 04:30:52.583445 | orchestrator | ===============================================================================
2026-02-13 04:30:52.583452 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 36.25s
2026-02-13 04:30:52.583459 | orchestrator | manila : Restart manila-share container -------------------------------- 17.23s
2026-02-13 04:30:52.583466 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.04s
2026-02-13 04:30:52.583473 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 12.23s
2026-02-13 04:30:52.583493 | orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.42s
2026-02-13 04:30:52.583501 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.55s
2026-02-13 04:30:52.583507 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.47s
2026-02-13 04:30:52.583514 | orchestrator | manila : Restart manila-data container ---------------------------------- 5.94s
2026-02-13 04:30:52.583520 | orchestrator | manila : Copying over config.json files for services -------------------- 4.76s
2026-02-13 04:30:52.583527 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 3.92s
2026-02-13 04:30:52.583534 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.84s
2026-02-13 04:30:52.583541 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.71s
2026-02-13 04:30:52.583548 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.37s
2026-02-13 04:30:52.583555 | orchestrator | manila : Check manila containers ---------------------------------------- 3.19s
2026-02-13 04:30:52.583563 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.10s
2026-02-13 04:30:52.583570 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.07s
2026-02-13 04:30:52.583577 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.30s
2026-02-13 04:30:52.583583 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.18s
2026-02-13 04:30:52.583589 | orchestrator | manila : Creating Manila database --------------------------------------- 2.08s
2026-02-13 04:30:52.583602 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.73s
2026-02-13 04:30:52.881285 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh
2026-02-13 04:31:05.089710 | orchestrator | 2026-02-13 04:31:05
| INFO  | Task 3c24f20c-1585-4c48-b05c-387a2496407c (netdata) was prepared for execution. 2026-02-13 04:31:05.089806 | orchestrator | 2026-02-13 04:31:05 | INFO  | It takes a moment until task 3c24f20c-1585-4c48-b05c-387a2496407c (netdata) has been started and output is visible here. 2026-02-13 04:32:41.859222 | orchestrator | 2026-02-13 04:32:41.859320 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:32:41.859331 | orchestrator | 2026-02-13 04:32:41.859343 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:32:41.859357 | orchestrator | Friday 13 February 2026 04:31:09 +0000 (0:00:00.231) 0:00:00.231 ******* 2026-02-13 04:32:41.859370 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-13 04:32:41.859382 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-13 04:32:41.859394 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-13 04:32:41.859407 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-13 04:32:41.859420 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-13 04:32:41.859432 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-13 04:32:41.859443 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-13 04:32:41.859451 | orchestrator | 2026-02-13 04:32:41.859459 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-13 04:32:41.859466 | orchestrator | 2026-02-13 04:32:41.859473 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-13 04:32:41.859481 | orchestrator | Friday 13 February 2026 04:31:10 +0000 (0:00:00.842) 0:00:01.073 ******* 2026-02-13 04:32:41.859490 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 04:32:41.859499 | orchestrator | 2026-02-13 04:32:41.859507 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-13 04:32:41.859514 | orchestrator | Friday 13 February 2026 04:31:11 +0000 (0:00:01.318) 0:00:02.392 ******* 2026-02-13 04:32:41.859521 | orchestrator | ok: [testbed-manager] 2026-02-13 04:32:41.859530 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:32:41.859538 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:32:41.859545 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:32:41.859552 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:32:41.859559 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:32:41.859586 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:32:41.859594 | orchestrator | 2026-02-13 04:32:41.859601 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-13 04:32:41.859609 | orchestrator | Friday 13 February 2026 04:31:13 +0000 (0:00:01.834) 0:00:04.227 ******* 2026-02-13 04:32:41.859616 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:32:41.859623 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:32:41.859630 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:32:41.859637 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:32:41.859645 | orchestrator | ok: [testbed-manager] 2026-02-13 04:32:41.859658 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:32:41.859670 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:32:41.859681 | orchestrator | 2026-02-13 04:32:41.859693 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-13 04:32:41.859705 | orchestrator | Friday 13 February 2026 04:31:16 +0000 (0:00:03.081) 0:00:07.309 ******* 
2026-02-13 04:32:41.859718 | orchestrator | changed: [testbed-manager]
2026-02-13 04:32:41.859731 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:32:41.859744 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:32:41.859782 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:32:41.859792 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:32:41.859812 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:32:41.859820 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:32:41.859836 | orchestrator |
2026-02-13 04:32:41.859857 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-02-13 04:32:41.859866 | orchestrator | Friday 13 February 2026 04:31:17 +0000 (0:00:01.527) 0:00:08.836 *******
2026-02-13 04:32:41.859875 | orchestrator | changed: [testbed-manager]
2026-02-13 04:32:41.859882 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:32:41.859891 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:32:41.859899 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:32:41.859907 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:32:41.859915 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:32:41.859923 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:32:41.859931 | orchestrator |
2026-02-13 04:32:41.859940 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-02-13 04:32:41.859948 | orchestrator | Friday 13 February 2026 04:31:36 +0000 (0:00:18.445) 0:00:27.281 *******
2026-02-13 04:32:41.859957 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:32:41.859965 | orchestrator | changed: [testbed-manager]
2026-02-13 04:32:41.859973 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:32:41.859981 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:32:41.859990 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:32:41.859998 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:32:41.860006 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:32:41.860055 | orchestrator |
2026-02-13 04:32:41.860064 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-02-13 04:32:41.860072 | orchestrator | Friday 13 February 2026 04:32:16 +0000 (0:00:39.778) 0:01:07.060 *******
2026-02-13 04:32:41.860081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 04:32:41.860093 | orchestrator |
2026-02-13 04:32:41.860101 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-02-13 04:32:41.860110 | orchestrator | Friday 13 February 2026 04:32:17 +0000 (0:00:01.619) 0:01:08.679 *******
2026-02-13 04:32:41.860118 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-02-13 04:32:41.860128 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-02-13 04:32:41.860136 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-02-13 04:32:41.860143 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-02-13 04:32:41.860167 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-02-13 04:32:41.860174 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-02-13 04:32:41.860181 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-02-13 04:32:41.860189 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-02-13 04:32:41.860196 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-02-13 04:32:41.860203 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-02-13 04:32:41.860210 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-02-13 04:32:41.860217 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-02-13 04:32:41.860224 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-02-13 04:32:41.860231 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-02-13 04:32:41.860238 | orchestrator |
2026-02-13 04:32:41.860246 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-02-13 04:32:41.860254 | orchestrator | Friday 13 February 2026 04:32:21 +0000 (0:00:03.372) 0:01:12.052 *******
2026-02-13 04:32:41.860261 | orchestrator | ok: [testbed-manager]
2026-02-13 04:32:41.860268 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:32:41.860282 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:32:41.860290 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:32:41.860297 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:32:41.860304 | orchestrator | ok: [testbed-node-4]
2026-02-13 04:32:41.860311 | orchestrator | ok: [testbed-node-5]
2026-02-13 04:32:41.860318 | orchestrator |
2026-02-13 04:32:41.860326 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-02-13 04:32:41.860333 | orchestrator | Friday 13 February 2026 04:32:22 +0000 (0:00:01.235) 0:01:13.288 *******
2026-02-13 04:32:41.860340 | orchestrator | changed: [testbed-manager]
2026-02-13 04:32:41.860347 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:32:41.860355 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:32:41.860362 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:32:41.860369 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:32:41.860376 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:32:41.860383 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:32:41.860390 | orchestrator |
2026-02-13 04:32:41.860398 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-02-13 04:32:41.860405 | orchestrator | Friday 13 February 2026 04:32:23 +0000 (0:00:01.257) 0:01:14.546 *******
2026-02-13 04:32:41.860412 | orchestrator | ok: [testbed-manager]
2026-02-13 04:32:41.860419 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:32:41.860426 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:32:41.860433 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:32:41.860440 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:32:41.860447 | orchestrator | ok: [testbed-node-4]
2026-02-13 04:32:41.860454 | orchestrator | ok: [testbed-node-5]
2026-02-13 04:32:41.860462 | orchestrator |
2026-02-13 04:32:41.860469 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-02-13 04:32:41.860476 | orchestrator | Friday 13 February 2026 04:32:24 +0000 (0:00:01.212) 0:01:15.759 *******
2026-02-13 04:32:41.860483 | orchestrator | ok: [testbed-manager]
2026-02-13 04:32:41.860491 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:32:41.860497 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:32:41.860505 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:32:41.860512 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:32:41.860519 | orchestrator | ok: [testbed-node-4]
2026-02-13 04:32:41.860526 | orchestrator | ok: [testbed-node-5]
2026-02-13 04:32:41.860533 | orchestrator |
2026-02-13 04:32:41.860540 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-02-13 04:32:41.860547 | orchestrator | Friday 13 February 2026 04:32:26 +0000 (0:00:01.702) 0:01:17.461 *******
2026-02-13 04:32:41.860559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-02-13 04:32:41.860569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 04:32:41.860576 | orchestrator |
2026-02-13 04:32:41.860584 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-02-13 04:32:41.860591 | orchestrator | Friday 13 February 2026 04:32:27 +0000 (0:00:01.337) 0:01:18.799 *******
2026-02-13 04:32:41.860598 | orchestrator | changed: [testbed-manager]
2026-02-13 04:32:41.860606 | orchestrator |
2026-02-13 04:32:41.860613 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-02-13 04:32:41.860620 | orchestrator | Friday 13 February 2026 04:32:30 +0000 (0:00:02.113) 0:01:20.912 *******
2026-02-13 04:32:41.860627 | orchestrator | changed: [testbed-manager]
2026-02-13 04:32:41.860634 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:32:41.860642 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:32:41.860649 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:32:41.860656 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:32:41.860663 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:32:41.860670 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:32:41.860682 | orchestrator |
2026-02-13 04:32:41.860689 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 04:32:41.860696 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 04:32:41.860704 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 04:32:41.860711 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 04:32:41.860718 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 04:32:41.860731 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 04:32:42.256651 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 04:32:42.256755 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 04:32:42.256772 | orchestrator |
2026-02-13 04:32:42.256786 | orchestrator |
2026-02-13 04:32:42.256800 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 04:32:42.256814 | orchestrator | Friday 13 February 2026 04:32:41 +0000 (0:00:11.774) 0:01:32.687 *******
2026-02-13 04:32:42.256826 | orchestrator | ===============================================================================
2026-02-13 04:32:42.256838 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.78s
2026-02-13 04:32:42.256850 | orchestrator | osism.services.netdata : Add repository -------------------------------- 18.45s
2026-02-13 04:32:42.256862 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.77s
2026-02-13 04:32:42.256874 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.37s
2026-02-13 04:32:42.256886 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.08s
2026-02-13 04:32:42.256898 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.11s
2026-02-13 04:32:42.256910 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.83s
2026-02-13 04:32:42.256921 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.70s
2026-02-13 04:32:42.256933 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.62s
2026-02-13 04:32:42.256945 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.53s
2026-02-13 04:32:42.256957 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.34s
2026-02-13 04:32:42.256969 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.32s
2026-02-13 04:32:42.256981 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.26s
2026-02-13 04:32:42.256992 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.24s
2026-02-13 04:32:42.257005 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.21s
2026-02-13 04:32:42.257115 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s
2026-02-13 04:32:46.417235 | orchestrator | 2026-02-13 04:32:46 | INFO  | Task e86edc9d-d551-4215-b136-cda328562b2a (prometheus) was prepared for execution.
2026-02-13 04:32:46.417353 | orchestrator | 2026-02-13 04:32:46 | INFO  | It takes a moment until task e86edc9d-d551-4215-b136-cda328562b2a (prometheus) has been started and output is visible here.
2026-02-13 04:32:55.920682 | orchestrator |
2026-02-13 04:32:55.920818 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 04:32:55.920838 | orchestrator |
2026-02-13 04:32:55.920876 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 04:32:55.920902 | orchestrator | Friday 13 February 2026 04:32:50 +0000 (0:00:00.292) 0:00:00.292 *******
2026-02-13 04:32:55.920913 | orchestrator | ok: [testbed-manager]
2026-02-13 04:32:55.920925 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:32:55.920936 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:32:55.920947 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:32:55.920957 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:32:55.920969 | orchestrator | ok: [testbed-node-4]
2026-02-13 04:32:55.920980 | orchestrator | ok: [testbed-node-5]
2026-02-13 04:32:55.921021 | orchestrator |
2026-02-13 04:32:55.921033 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 04:32:55.921044 | orchestrator | Friday 13 February 2026 04:32:51 +0000 (0:00:00.832) 0:00:01.124 *******
2026-02-13 04:32:55.921055 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-02-13 04:32:55.921067 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-02-13 04:32:55.921078 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-02-13 04:32:55.921088 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-02-13 04:32:55.921099 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-02-13 04:32:55.921109 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-02-13 04:32:55.921120 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-02-13 04:32:55.921131 | orchestrator |
2026-02-13 04:32:55.921142 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-02-13 04:32:55.921153 | orchestrator |
2026-02-13 04:32:55.921163 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-02-13 04:32:55.921174 | orchestrator | Friday 13 February 2026 04:32:52 +0000 (0:00:00.969) 0:00:02.094 *******
2026-02-13 04:32:55.921187 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 04:32:55.921200 | orchestrator |
2026-02-13 04:32:55.921214 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-02-13 04:32:55.921227 | orchestrator | Friday 13 February 2026 04:32:53 +0000 (0:00:01.387) 0:00:03.481 *******
2026-02-13 04:32:55.921243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:32:55.921261 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-13 04:32:55.921277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:32:55.921298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:32:55.921337 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:32:55.921352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:32:55.921365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:32:55.921377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:32:55.921390 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:32:55.921404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:32:55.921418 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:32:55.921444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:32:57.073077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:32:57.073176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:32:57.073189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:32:57.073198 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:32:57.073207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:32:57.073218 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-13 04:32:57.073258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-13 04:32:57.073273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:32:57.073282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:32:57.073290 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-13 04:32:57.073299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:32:57.073307 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:32:57.073320 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:32:57.073329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:32:57.073348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:01.936771 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-13 04:33:01.936888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:01.936917 | orchestrator | 2026-02-13 04:33:01.936935 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-13 04:33:01.936954 | orchestrator | Friday 13 February 2026 04:32:57 +0000 (0:00:03.298) 0:00:06.779 ******* 2026-02-13 04:33:01.936969 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 04:33:01.937067 | orchestrator | 2026-02-13 04:33:01.937085 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-13 04:33:01.937100 | orchestrator | Friday 13 February 2026 04:32:58 +0000 (0:00:01.691) 0:00:08.471 ******* 2026-02-13 04:33:01.937116 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-13 04:33:01.937162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:01.937180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:01.937196 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:01.937251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:01.937263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:01.937272 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-02-13 04:33:01.937281 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:01.937300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:01.937311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:01.937322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:01.937339 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:01.937359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:04.413699 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:04.413834 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:04.413890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:04.413911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:04.413923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-13 04:33:04.413934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-13 04:33:04.413959 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-13 04:33:04.414107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:04.414127 | orchestrator | 
changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-13 04:33:04.414150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:04.414161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:04.414171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:04.414181 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:04.414192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:04.414211 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:05.415518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:05.415682 | orchestrator | 2026-02-13 04:33:05.415732 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-13 04:33:05.415753 | orchestrator | Friday 13 February 2026 04:33:04 +0000 (0:00:05.647) 0:00:14.119 ******* 2026-02-13 04:33:05.415776 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-13 04:33:05.415799 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:05.415819 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:05.415914 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-13 04:33:05.415969 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:05.416034 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:33:05.416073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:05.416093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:05.416112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:05.416131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:05.416150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:05.416177 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:05.416197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:05.416231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:06.000819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:06.000925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:06.000938 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:33:06.000948 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:33:06.000957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:06.000965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:06.001044 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:06.001079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:06.001088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:06.001113 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:33:06.001147 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:06.001164 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:06.001176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 04:33:06.001188 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:33:06.001200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:06.001212 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:06.001231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 04:33:06.001244 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:33:06.001256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:06.001284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:06.957757 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 04:33:06.957886 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:33:06.957906 | orchestrator | 2026-02-13 04:33:06.957921 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-02-13 04:33:06.957934 | orchestrator | Friday 13 February 2026 04:33:05 +0000 (0:00:01.584) 0:00:15.704 ******* 2026-02-13 04:33:06.957946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:06.957959 | orchestrator 
| skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:06.958074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:06.958093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:06.958124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:06.958178 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-13 04:33:06.958192 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:06.958205 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:06.958219 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-13 04:33:06.958235 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:06.958273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:06.958295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:06.958348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:08.018119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:08.018236 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:33:08.018265 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:08.018289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:08.018350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:08.018368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:08.018419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:08.018431 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:33:08.018443 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:33:08.018455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 04:33:08.018466 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:33:08.018499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:08.018514 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:08.018528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 04:33:08.018541 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:33:08.018555 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:08.018580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:08.018608 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 04:33:08.018627 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:33:08.018646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 04:33:08.018680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 04:33:11.438784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 04:33:11.438899 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:33:11.438917 | orchestrator | 2026-02-13 04:33:11.438929 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-02-13 04:33:11.438942 | orchestrator | Friday 13 February 2026 04:33:07 +0000 (0:00:02.009) 0:00:17.713 ******* 2026-02-13 04:33:11.438955 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-13 04:33:11.439018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:11.439057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:11.439094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:11.439106 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:11.439134 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:11.439147 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:11.439158 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:33:11.439169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:11.439189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:11.439201 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:11.439218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:11.439230 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:11.439251 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:14.213757 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:14.213899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:14.214102 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-13 04:33:14.214139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:14.214180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:14.214204 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-13 04:33:14.214254 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-13 04:33:14.214275 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-13 04:33:14.214296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:14.214333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:14.214352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-13 04:33:14.214380 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:14.214402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 04:33:14.214420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:33:14.214455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:33:18.374843 | orchestrator |
2026-02-13 04:33:18.375079 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2026-02-13 04:33:18.375112 | orchestrator | Friday 13 February 2026 04:33:14 +0000 (0:00:06.202) 0:00:23.916 *******
2026-02-13 04:33:18.375125 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-13 04:33:18.375166 | orchestrator |
2026-02-13 04:33:18.375178 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2026-02-13 04:33:18.375189 | orchestrator | Friday 13 February 2026 04:33:15 +0000 (0:00:00.920) 0:00:24.837 *******
2026-02-13 04:33:18.375203 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098585, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:18.375218 | orchestrator | changed: [testbed-manager] =>
(item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098585, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:18.375229 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098585, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375256 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098585, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375268 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098585, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375280 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098634, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7776108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375314 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098585, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375333 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098585, 'dev': 79, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375345 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098634, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7776108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375357 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098634, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7776108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375373 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098634, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7776108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375384 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098634, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7776108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375396 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098579, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7645113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:18.375422 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098634, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7776108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007435 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098579, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7645113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007542 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098579, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7645113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007558 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098579, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7645113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007585 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098579, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7645113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007597 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098579, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7645113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007607 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098608, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7748623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007635 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098634, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1770950271.7776108, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:20.007662 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098608, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7748623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007674 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098608, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7748623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007684 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098608, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7748623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007699 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098608, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7748623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007709 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098608, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7748623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007719 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098574, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7633827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007737 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098574, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7633827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:20.007754 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098574, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7633827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:21.532252 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098574, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7633827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:21.532352 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098587, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7662592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532383 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098574, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7633827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532395 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098574, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7633827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532405 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098587, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7662592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532451 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098587, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7662592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532462 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098601, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7736702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532497 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098601, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7736702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532516 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098587, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7662592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532542 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098587, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7662592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532561 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098579, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7645113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532589 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098588, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7665348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532606 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098587, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7662592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532623 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098601, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7736702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:21.532653 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098588, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7665348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.757855 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098601, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7736702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758073 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098588, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7665348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758095 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098601, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7736702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758128 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098601, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7736702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758141 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098583, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758154 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098583, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758165 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098583, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758195 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098588, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7665348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758212 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098628, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7770772, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758224 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098588, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7665348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758242 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098628, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7770772, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758254 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098588, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7665348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758265 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098571, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7627254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758276 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098583, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:22.758295 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098628, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7770772, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.941933 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098608, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7748623, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942213 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098583, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942269 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098571, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7627254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942292 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098571, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7627254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942312 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098628, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7770772, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942333 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098670, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942346 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098583, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942385 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098670, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942407 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098628, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7770772, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942419 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098622, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7761273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942433 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098571, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7627254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942446 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098670, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942459 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098622, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7761273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942472 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098670, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:23.942497 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098576, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7635114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.342977 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098576, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7635114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343061 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098628, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7770772, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343074 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098571, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7627254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343083 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098572, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7630355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343092 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098622, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7761273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343101 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098571, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7627254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343124 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098574, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7633827, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343170 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098593, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7695115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343180 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098622, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7761273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343188 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098572, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7630355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343196 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098670, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343205 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098670, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343213 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098576, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7635114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343231 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098576, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7635114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:25.343242 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098590, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7668815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:26.609567 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098622, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7761273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:26.609671 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098593, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7695115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:26.609688 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098587, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7662592, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:26.609701 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098622, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7761273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:26.609713 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098572, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7630355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:26.609764 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098664, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:26.609775 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:33:26.609805 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098572, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7630355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:26.609816 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098590, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7668815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:26.609828 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098576, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7635114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:26.609839 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098593, 'dev': 79, 'nlink': 
1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7695115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:26.609850 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098664, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:26.609862 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:33:26.609874 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098576, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7635114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:26.609899 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098593, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7695115, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:26.609910 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098590, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7668815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:26.609930 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098572, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7630355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:32.015790 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098572, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7630355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2026-02-13 04:33:32.016110 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098664, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:32.016178 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:33:32.016204 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098590, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7668815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:32.016243 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098593, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7695115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:32.016271 | orchestrator | skipping: [testbed-node-4] 
=> (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098593, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7695115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:32.016283 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098601, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7736702, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:32.016294 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098664, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:32.016329 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:33:32.016343 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098590, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7668815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:32.016357 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098590, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7668815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:32.016370 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098664, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:32.016391 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:33:32.016404 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 3539, 'inode': 1098664, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-13 04:33:32.016416 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:33:32.016447 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098588, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7665348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:32.016459 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098583, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7655113, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:32.016480 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098628, 'dev': 79, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7770772, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:58.738589 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098571, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7627254, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:58.738729 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098670, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:58.738787 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098622, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7761273, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:58.738807 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098576, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7635114, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:58.738845 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098572, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7630355, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:58.738864 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098593, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7695115, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-13 04:33:58.738882 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098590, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7668815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:58.738958 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098664, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.783064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-13 04:33:58.738981 | orchestrator |
2026-02-13 04:33:58.739005 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-13 04:33:58.739023 | orchestrator | Friday 13 February 2026 04:33:39 +0000 (0:00:24.245) 0:00:49.082 *******
2026-02-13 04:33:58.739039 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-13 04:33:58.739057 | orchestrator |
2026-02-13 04:33:58.739074 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-13 04:33:58.739104 | orchestrator | Friday 13 February 2026 04:33:40 +0000 (0:00:00.746) 0:00:49.829 *******
2026-02-13 04:33:58.739123 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2026-02-13 04:33:58.739222 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2026-02-13 04:33:58.739311 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2026-02-13 04:33:58.739409 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2026-02-13 04:33:58.739499 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2026-02-13 04:33:58.739581 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2026-02-13 04:33:58.739678 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2026-02-13 04:33:58.739766 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-13 04:33:58.739785 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-13 04:33:58.739803 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-13 04:33:58.739819 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-13 04:33:58.739836 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-13 04:33:58.739852 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-13 04:33:58.739869 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-13 04:33:58.739886 | orchestrator |
2026-02-13 04:33:58.739933 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2026-02-13 04:33:58.739969 | orchestrator |
Friday 13 February 2026 04:33:41 +0000 (0:00:01.756) 0:00:51.586 *******
2026-02-13 04:33:58.739986 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-13 04:33:58.740007 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:33:58.740024 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-13 04:33:58.740041 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:33:58.740058 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-13 04:33:58.740075 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:33:58.740112 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-13 04:34:15.440546 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:34:15.440666 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-13 04:34:15.440683 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:34:15.440695 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-13 04:34:15.440706 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:34:15.440717 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2026-02-13 04:34:15.440728 | orchestrator |
2026-02-13 04:34:15.440741 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2026-02-13 04:34:15.440752 | orchestrator | Friday 13 February 2026 04:33:58 +0000 (0:00:16.857) 0:01:08.443 *******
2026-02-13 04:34:15.440763 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-13 04:34:15.440774 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:34:15.440785 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-13 04:34:15.440796 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:34:15.440807 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-13 04:34:15.440818 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:34:15.440829 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-13 04:34:15.440840 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:34:15.440851 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-13 04:34:15.440862 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:34:15.440873 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-13 04:34:15.440884 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:34:15.441031 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2026-02-13 04:34:15.441047 | orchestrator |
2026-02-13 04:34:15.441059 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2026-02-13 04:34:15.441071 | orchestrator | Friday 13 February 2026 04:34:01 +0000 (0:00:02.954) 0:01:11.398 *******
2026-02-13 04:34:15.441084 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-13 04:34:15.441099 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:34:15.441112 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-13 04:34:15.441124 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:34:15.441137 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-13 04:34:15.441149 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:34:15.441161 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-13 04:34:15.441198 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:34:15.441211 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-13 04:34:15.441224 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-13 04:34:15.441253 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:34:15.441266 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2026-02-13 04:34:15.441278 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:34:15.441291 | orchestrator |
2026-02-13 04:34:15.441303 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2026-02-13 04:34:15.441316 | orchestrator | Friday 13 February 2026 04:34:03 +0000 (0:00:01.717) 0:01:13.115 *******
2026-02-13 04:34:15.441328 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-13 04:34:15.441341 | orchestrator |
2026-02-13 04:34:15.441354 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2026-02-13 04:34:15.441367 | orchestrator | Friday 13 February 2026 04:34:04 +0000 (0:00:00.737) 0:01:13.853 *******
2026-02-13 04:34:15.441389 | orchestrator | skipping: [testbed-manager]
2026-02-13 04:34:15.441402 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:34:15.441415 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:34:15.441427 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:34:15.441438 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:34:15.441449 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:34:15.441460 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:34:15.441470 | orchestrator |
2026-02-13 04:34:15.441481 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-13 04:34:15.441492 | orchestrator | Friday 13 February 2026 04:34:04 +0000 (0:00:00.703) 0:01:14.557 *******
2026-02-13 04:34:15.441503 | orchestrator | skipping: [testbed-manager]
2026-02-13 04:34:15.441514 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:34:15.441524 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:34:15.441535 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:34:15.441546 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:34:15.441557 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:34:15.441568 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:34:15.441578 | orchestrator |
2026-02-13 04:34:15.441590 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-13 04:34:15.441618 | orchestrator | Friday 13 February 2026 04:34:07 +0000 (0:00:02.260) 0:01:16.817 *******
2026-02-13 04:34:15.441630 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-13 04:34:15.441641 | orchestrator | skipping: [testbed-manager]
2026-02-13 04:34:15.441652 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-13 04:34:15.441663 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:34:15.441674 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-13 04:34:15.441685 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-13 04:34:15.441695 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:34:15.441706 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:34:15.441717 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-13 04:34:15.441728 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:34:15.441739 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-13 04:34:15.441749 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:34:15.441760 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-13 04:34:15.441781 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:34:15.441792 | orchestrator |
2026-02-13 04:34:15.441803 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-13 04:34:15.441814 | orchestrator | Friday 13 February 2026 04:34:08 +0000 (0:00:01.424) 0:01:18.242 *******
2026-02-13 04:34:15.441825 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-13 04:34:15.441836 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:34:15.441847 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-13 04:34:15.441857 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:34:15.441868 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-13 04:34:15.441879 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:34:15.441917 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-13 04:34:15.441936 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:34:15.441955 | orchestrator | skipping: [testbed-node-4] =>
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-13 04:34:15.441974 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:34:15.441992 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-02-13 04:34:15.442011 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:34:15.442078 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-02-13 04:34:15.442090 | orchestrator | 2026-02-13 04:34:15.442101 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-02-13 04:34:15.442112 | orchestrator | Friday 13 February 2026 04:34:09 +0000 (0:00:01.444) 0:01:19.687 ******* 2026-02-13 04:34:15.442123 | orchestrator | [WARNING]: Skipped 2026-02-13 04:34:15.442136 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-02-13 04:34:15.442147 | orchestrator | due to this access issue: 2026-02-13 04:34:15.442158 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-02-13 04:34:15.442169 | orchestrator | not a directory 2026-02-13 04:34:15.442186 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-13 04:34:15.442197 | orchestrator | 2026-02-13 04:34:15.442208 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-02-13 04:34:15.442224 | orchestrator | Friday 13 February 2026 04:34:11 +0000 (0:00:01.162) 0:01:20.849 ******* 2026-02-13 04:34:15.442242 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:34:15.442259 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:34:15.442278 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:34:15.443347 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:34:15.443433 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:34:15.443449 
| orchestrator | skipping: [testbed-node-4] 2026-02-13 04:34:15.443461 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:34:15.443472 | orchestrator | 2026-02-13 04:34:15.443486 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-02-13 04:34:15.443499 | orchestrator | Friday 13 February 2026 04:34:12 +0000 (0:00:00.967) 0:01:21.816 ******* 2026-02-13 04:34:15.443510 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:34:15.443521 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:34:15.443532 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:34:15.443542 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:34:15.443553 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:34:15.443564 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:34:15.443575 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:34:15.443585 | orchestrator | 2026-02-13 04:34:15.443596 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-02-13 04:34:15.443634 | orchestrator | Friday 13 February 2026 04:34:13 +0000 (0:00:00.948) 0:01:22.765 ******* 2026-02-13 04:34:15.443676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-13 04:34:17.262323 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-13 04:34:17.262434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:34:17.262449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:34:17.262461 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:34:17.262490 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:34:17.262502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:34:17.262541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-13 04:34:17.262571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:34:17.262584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:34:17.262596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:34:17.262608 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:34:17.262620 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:34:17.262637 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:34:17.262656 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:34:17.262675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:34:19.206093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:34:19.206187 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-13 04:34:19.206202 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-13 04:34:19.206227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:34:19.206237 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-13 04:34:19.206278 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-13 04:34:19.206311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:34:19.206325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:34:19.206339 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:34:19.206352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-13 04:34:19.206365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:34:19.206386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:34:19.206411 | orchestrator | changed:
[testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 04:34:19.206426 | orchestrator |
2026-02-13 04:34:19.206442 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-02-13 04:34:19.206458 | orchestrator | Friday 13 February 2026 04:34:17 +0000 (0:00:04.209) 0:01:26.974 *******
2026-02-13 04:34:19.206472 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-13 04:34:19.206486 | orchestrator | skipping: [testbed-manager]
2026-02-13 04:34:19.206499 | orchestrator |
2026-02-13 04:34:19.206508 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-13 04:34:19.206516 | orchestrator | Friday 13 February 2026 04:34:18 +0000 (0:00:01.239) 0:01:28.214 *******
2026-02-13 04:34:19.206524 | orchestrator |
2026-02-13 04:34:19.206531 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-13 04:34:19.206539 | orchestrator | Friday 13 February 2026 04:34:18 +0000 (0:00:00.248) 0:01:28.462 *******
2026-02-13 04:34:19.206547 | orchestrator |
2026-02-13 04:34:19.206555 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-13 04:34:19.206562 | orchestrator | Friday 13 February 2026 04:34:18 +0000 (0:00:00.072) 0:01:28.535 *******
2026-02-13 04:34:19.206570 | orchestrator |
2026-02-13 04:34:19.206578 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-13 04:34:19.206594 | orchestrator | Friday 13 February 2026 04:34:18 +0000 (0:00:00.070) 0:01:28.605 *******
2026-02-13 04:36:06.031062 | orchestrator |
2026-02-13 04:36:06.031185 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-13 04:36:06.031203 | orchestrator | Friday 13 February 2026 04:34:18 +0000 (0:00:00.066) 0:01:28.672 *******
2026-02-13 04:36:06.031216 | orchestrator |
2026-02-13 04:36:06.031228 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-13 04:36:06.031256 | orchestrator | Friday 13 February 2026 04:34:19 +0000 (0:00:00.071) 0:01:28.743 *******
2026-02-13 04:36:06.031267 | orchestrator |
2026-02-13 04:36:06.031289 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-13 04:36:06.031300 | orchestrator | Friday 13 February 2026 04:34:19 +0000 (0:00:00.068) 0:01:28.812 *******
2026-02-13 04:36:06.031311 | orchestrator |
2026-02-13 04:36:06.031323 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-02-13 04:36:06.031335 | orchestrator | Friday 13 February 2026 04:34:19 +0000 (0:00:00.093) 0:01:28.906 *******
2026-02-13 04:36:06.031348 | orchestrator | changed: [testbed-manager]
2026-02-13 04:36:06.031361 | orchestrator |
2026-02-13 04:36:06.031374 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-02-13 04:36:06.031386 | orchestrator | Friday 13 February 2026 04:34:42 +0000 (0:00:22.925) 0:01:51.832 *******
2026-02-13 04:36:06.031398 | orchestrator | changed: [testbed-manager]
2026-02-13 04:36:06.031410 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:36:06.031422 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:36:06.031436 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:36:06.031447 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:36:06.031460 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:36:06.031471 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:36:06.031483 | orchestrator |
2026-02-13 04:36:06.031494 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-02-13 04:36:06.031505 | orchestrator | Friday 13 February 2026 04:34:55 +0000 (0:00:13.821) 0:02:05.654 *******
2026-02-13 04:36:06.031545 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:36:06.031556 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:36:06.031568 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:36:06.031579 | orchestrator |
2026-02-13 04:36:06.031590 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-02-13 04:36:06.031602 | orchestrator | Friday 13 February 2026 04:35:06 +0000 (0:00:10.126) 0:02:15.781 *******
2026-02-13 04:36:06.031614 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:36:06.031625 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:36:06.031635 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:36:06.031646 | orchestrator |
2026-02-13 04:36:06.031658 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-02-13 04:36:06.031670 | orchestrator | Friday 13 February 2026 04:35:16 +0000 (0:00:10.443) 0:02:26.224 *******
2026-02-13 04:36:06.031682 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:36:06.031694 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:36:06.031707 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:36:06.031720 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:36:06.031732 | orchestrator | changed: [testbed-manager]
2026-02-13 04:36:06.031744 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:36:06.031755 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:36:06.031766 | orchestrator |
2026-02-13 04:36:06.031777 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-13 04:36:06.031787 | orchestrator | Friday 13 February 2026 04:35:30 +0000 (0:00:14.006) 0:02:40.230 *******
2026-02-13 04:36:06.031797 | orchestrator | changed: [testbed-manager]
2026-02-13 04:36:06.031807 | orchestrator |
2026-02-13 04:36:06.031818 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-13 04:36:06.031844 | orchestrator | Friday 13 February 2026 04:35:39 +0000 (0:00:08.675) 0:02:48.906 *******
2026-02-13 04:36:06.031879 | orchestrator | changed: [testbed-node-1]
2026-02-13 04:36:06.031892 | orchestrator | changed: [testbed-node-2]
2026-02-13 04:36:06.031904 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:36:06.031916 | orchestrator |
2026-02-13 04:36:06.031927 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-13 04:36:06.031938 | orchestrator | Friday 13 February 2026 04:35:49 +0000 (0:00:10.210) 0:02:59.116 *******
2026-02-13 04:36:06.031949 | orchestrator | changed: [testbed-manager]
2026-02-13 04:36:06.031960 | orchestrator |
2026-02-13 04:36:06.031971 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-13 04:36:06.031983 | orchestrator | Friday 13 February 2026 04:35:55 +0000 (0:00:05.818) 0:03:04.935 *******
2026-02-13 04:36:06.031994 | orchestrator | changed: [testbed-node-4]
2026-02-13 04:36:06.032005 | orchestrator | changed: [testbed-node-3]
2026-02-13 04:36:06.032017 | orchestrator | changed: [testbed-node-5]
2026-02-13 04:36:06.032027 | orchestrator |
2026-02-13 04:36:06.032038 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 04:36:06.032051 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-13 04:36:06.032064 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-13 04:36:06.032075 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-13 04:36:06.032085 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-13 04:36:06.032095 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-13 04:36:06.032127 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-13 04:36:06.032154 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-13 04:36:06.032165 | orchestrator |
2026-02-13 04:36:06.032175 | orchestrator |
2026-02-13 04:36:06.032187 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 04:36:06.032197 | orchestrator | Friday 13 February 2026 04:36:05 +0000 (0:00:10.279) 0:03:15.214 *******
2026-02-13 04:36:06.032208 | orchestrator | ===============================================================================
2026-02-13 04:36:06.032219 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 24.25s
2026-02-13 04:36:06.032228 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.93s
2026-02-13 04:36:06.032237 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.86s
2026-02-13 04:36:06.032249 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.01s
2026-02-13 04:36:06.032256 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.82s
2026-02-13 04:36:06.032263 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.44s
2026-02-13 04:36:06.032269 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.28s
2026-02-13 04:36:06.032276 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.21s
2026-02-13 04:36:06.032282 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.13s
2026-02-13 04:36:06.032289 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.68s
2026-02-13 04:36:06.032296 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.20s
2026-02-13 04:36:06.032302 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.82s
2026-02-13 04:36:06.032308 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.65s
2026-02-13 04:36:06.032315 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.21s
2026-02-13 04:36:06.032321 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.30s
2026-02-13 04:36:06.032328 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.95s
2026-02-13 04:36:06.032334 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.26s
2026-02-13 04:36:06.032341 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.01s
2026-02-13 04:36:06.032347 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.76s
2026-02-13 04:36:06.032354 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 1.72s
2026-02-13 04:36:09.434739 | orchestrator | 2026-02-13 04:36:09 | INFO  | Task 6831c4d9-eca2-4dbc-bd4a-45db6817eab9 (grafana) was prepared for execution.
2026-02-13 04:36:09.434904 | orchestrator | 2026-02-13 04:36:09 | INFO  | It takes a moment until task 6831c4d9-eca2-4dbc-bd4a-45db6817eab9 (grafana) has been started and output is visible here. 2026-02-13 04:36:19.358436 | orchestrator | 2026-02-13 04:36:19.358587 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:36:19.358620 | orchestrator | 2026-02-13 04:36:19.358641 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:36:19.358661 | orchestrator | Friday 13 February 2026 04:36:13 +0000 (0:00:00.262) 0:00:00.262 ******* 2026-02-13 04:36:19.358682 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:36:19.358701 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:36:19.358719 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:36:19.358739 | orchestrator | 2026-02-13 04:36:19.358751 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:36:19.358767 | orchestrator | Friday 13 February 2026 04:36:14 +0000 (0:00:00.343) 0:00:00.606 ******* 2026-02-13 04:36:19.358817 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-13 04:36:19.358839 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-13 04:36:19.358890 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-13 04:36:19.358908 | orchestrator | 2026-02-13 04:36:19.358942 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-13 04:36:19.358963 | orchestrator | 2026-02-13 04:36:19.358982 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-13 04:36:19.359001 | orchestrator | Friday 13 February 2026 04:36:14 +0000 (0:00:00.477) 0:00:01.084 ******* 2026-02-13 04:36:19.359022 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-13 04:36:19.359043 | orchestrator | 2026-02-13 04:36:19.359063 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-13 04:36:19.359082 | orchestrator | Friday 13 February 2026 04:36:15 +0000 (0:00:00.556) 0:00:01.640 ******* 2026-02-13 04:36:19.359105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:19.359132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:19.359152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:19.359172 | orchestrator | 2026-02-13 04:36:19.359191 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-13 04:36:19.359211 | orchestrator | Friday 13 February 2026 04:36:15 +0000 (0:00:00.918) 0:00:02.559 ******* 2026-02-13 04:36:19.359229 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-13 04:36:19.359249 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-13 04:36:19.359264 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:36:19.359277 | orchestrator | 2026-02-13 04:36:19.359290 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-13 04:36:19.359302 | orchestrator | Friday 13 February 2026 04:36:16 +0000 (0:00:00.899) 0:00:03.458 ******* 2026-02-13 04:36:19.359315 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:36:19.359340 | orchestrator | 2026-02-13 04:36:19.359351 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-13 04:36:19.359362 | orchestrator | Friday 13 February 2026 04:36:17 +0000 (0:00:00.573) 0:00:04.031 ******* 2026-02-13 04:36:19.359404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:19.359416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:19.359428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:19.359439 | 
orchestrator | 2026-02-13 04:36:19.359450 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-13 04:36:19.359461 | orchestrator | Friday 13 February 2026 04:36:18 +0000 (0:00:01.323) 0:00:05.355 ******* 2026-02-13 04:36:19.359472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-13 04:36:19.359484 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:36:19.359495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-13 04:36:19.359513 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:36:19.359539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-13 04:36:26.359387 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:36:26.359488 | orchestrator | 2026-02-13 04:36:26.359500 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-13 04:36:26.359510 | orchestrator | Friday 13 February 2026 04:36:19 +0000 (0:00:00.593) 0:00:05.949 ******* 2026-02-13 04:36:26.359520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-13 04:36:26.359532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-13 04:36:26.359541 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:36:26.359549 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:36:26.359557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-13 04:36:26.359566 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:36:26.359574 | orchestrator | 2026-02-13 04:36:26.359582 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-13 04:36:26.359590 | orchestrator | Friday 13 February 2026 04:36:19 +0000 (0:00:00.638) 0:00:06.587 ******* 2026-02-13 04:36:26.359598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:26.359626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:26.359663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:26.359673 | orchestrator | 2026-02-13 04:36:26.359682 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-13 04:36:26.359690 | orchestrator | Friday 13 
February 2026 04:36:21 +0000 (0:00:01.344) 0:00:07.932 ******* 2026-02-13 04:36:26.359698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:26.359706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:26.359715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:36:26.359729 | orchestrator | 2026-02-13 04:36:26.359737 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-13 04:36:26.359745 | orchestrator | Friday 13 February 2026 04:36:22 +0000 (0:00:01.606) 0:00:09.539 ******* 2026-02-13 04:36:26.359753 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:36:26.359761 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:36:26.359769 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:36:26.359777 | orchestrator | 2026-02-13 04:36:26.359785 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-13 04:36:26.359792 | orchestrator | Friday 13 February 2026 04:36:23 +0000 (0:00:00.372) 0:00:09.911 ******* 2026-02-13 04:36:26.359800 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-13 04:36:26.359810 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-13 04:36:26.359817 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-13 04:36:26.359825 | orchestrator | 2026-02-13 04:36:26.359833 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-13 04:36:26.359841 | orchestrator | Friday 13 February 2026 04:36:24 +0000 (0:00:01.296) 0:00:11.208 ******* 2026-02-13 04:36:26.359903 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-13 04:36:26.359913 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-13 04:36:26.359925 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-13 04:36:26.359933 | orchestrator | 2026-02-13 04:36:26.359943 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-13 04:36:26.359958 | orchestrator | Friday 13 February 2026 04:36:26 +0000 (0:00:01.734) 0:00:12.943 ******* 2026-02-13 04:36:32.755927 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:36:32.756035 | orchestrator | 2026-02-13 04:36:32.756053 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-02-13 04:36:32.756066 | orchestrator | Friday 13 February 2026 04:36:27 +0000 (0:00:00.744) 0:00:13.688 ******* 2026-02-13 04:36:32.756077 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-13 04:36:32.756089 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-02-13 04:36:32.756100 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:36:32.756113 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:36:32.756124 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:36:32.756134 | orchestrator | 2026-02-13 04:36:32.756146 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-13 04:36:32.756157 | orchestrator | Friday 13 February 2026 04:36:27 +0000 (0:00:00.736) 0:00:14.424 ******* 2026-02-13 04:36:32.756168 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:36:32.756180 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:36:32.756191 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:36:32.756202 | orchestrator | 2026-02-13 04:36:32.756213 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-13 
04:36:32.756224 | orchestrator | Friday 13 February 2026 04:36:28 +0000 (0:00:00.332) 0:00:14.756 ******* 2026-02-13 04:36:32.756239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098272, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7135074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098272, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7135074, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098272, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7135074, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098390, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7265875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098390, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7265875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098390, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7265875, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098309, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7160263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098309, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7160263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098309, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770950271.7160263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098393, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7285104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098393, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7285104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:32.756460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098393, 'dev': 79, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7285104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098346, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.722053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098346, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.722053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098346, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.722053, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098378, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7255104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098378, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7255104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098378, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7255104, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366748 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098268, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7108812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098268, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7108812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098268, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7108812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366825 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098297, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7145102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098297, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7145102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098314, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7169938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:36.366962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098314, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7169938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098297, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7145102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098362, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7235482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098362, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7235482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098314, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7169938, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098386, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7265875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098386, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7265875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098362, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7235482, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098302, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7160263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098302, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7160263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098386, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7265875, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098376, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7249253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098302, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7160263, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:40.840491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098376, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7249253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1098351, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7230608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098376, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7249253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1098351, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7230608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1098336, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.721627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1098351, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7230608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1098336, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.721627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1098327, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.719721, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1098327, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.719721, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1098336, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.721627, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1098367, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7245681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1098367, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7245681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1098327, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.719721, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:44.482593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1098319, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7185102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1098319, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7185102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1098367, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7245681, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098383, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.726073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098383, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.726073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1098319, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7185102, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624295 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098563, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7611253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098563, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7611253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098383, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.726073, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098445, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7401648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624345 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098445, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7401648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098563, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7611253, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098417, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.732223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:48.624387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098417, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.732223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:52.637285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098445, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7401648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:52.637399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098475, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7419155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:52.637416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098475, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7419155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:52.637428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098417, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.732223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:52.637480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098404, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7301545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:52.637495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098404, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7301545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:52.637526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098475, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7419155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:52.637538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098527, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7535112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:52.637549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098527, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7535112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-02-13 04:36:52.637561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False,
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098404, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7301545, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:52.637584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098480, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.749511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:52.637596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098480, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.749511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:52.637616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098527, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7535112, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.360738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098531, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7544298, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.360832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098531, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7544298, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 
04:36:56.360911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098480, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.749511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.360955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098558, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7596738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.360964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098558, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7596738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.360971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098525, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7519581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.360992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098531, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7544298, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.361000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098525, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7519581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.361007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098465, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7413862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.361022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098465, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7413862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.361030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098558, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7596738, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.361036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098433, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7366033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:36:56.361050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098433, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7366033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.563922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098525, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1770950271.7519581, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098464, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7405107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564037 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098465, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7413862, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098464, 'dev': 79, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7405107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098422, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7343602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098422, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7343602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 82960, 'inode': 1098433, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7366033, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564102 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098471, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7419155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098471, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7419155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098464, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7405107, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098549, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7586064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098549, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7586064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:00.564180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098422, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7343602, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098539, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.756483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098539, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.756483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 
04:37:04.254523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098471, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7419155, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098406, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7305105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098406, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7305105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098549, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7586064, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098412, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7315252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098412, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7315252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098539, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.756483, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098515, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7515311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098515, 'dev': 79, 'nlink': 1, 
'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7515311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098406, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7305105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:37:04.254617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098536, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.754511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:38:44.517343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 21898, 'inode': 1098536, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.754511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:38:44.517479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098412, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7315252, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:38:44.517500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098515, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.7515311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:38:44.517516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098536, 'dev': 79, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1770950271.754511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-13 04:38:44.517529 | orchestrator | 2026-02-13 04:38:44.517543 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-13 04:38:44.517556 | orchestrator | Friday 13 February 2026 04:37:07 +0000 (0:00:39.032) 0:00:53.789 ******* 2026-02-13 04:38:44.517569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:38:44.517623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:38:44.517636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-13 04:38:44.517648 | orchestrator | 2026-02-13 04:38:44.517660 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-13 04:38:44.517672 | orchestrator | Friday 13 February 2026 04:37:08 +0000 (0:00:01.018) 0:00:54.807 ******* 2026-02-13 04:38:44.517682 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:38:44.517694 | orchestrator | 2026-02-13 04:38:44.517704 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-13 04:38:44.517716 | orchestrator | Friday 13 February 2026 04:37:10 +0000 (0:00:02.263) 0:00:57.071 ******* 2026-02-13 04:38:44.517726 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:38:44.517737 | orchestrator | 2026-02-13 04:38:44.517748 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-13 04:38:44.517771 | orchestrator | Friday 13 February 2026 04:37:12 +0000 (0:00:02.174) 0:00:59.246 ******* 2026-02-13 04:38:44.517782 | orchestrator | 2026-02-13 04:38:44.517793 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-02-13 04:38:44.517804 | orchestrator | Friday 13 February 2026 04:37:12 +0000 (0:00:00.081) 0:00:59.328 ******* 2026-02-13 04:38:44.517815 | orchestrator | 2026-02-13 04:38:44.517826 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-13 04:38:44.517836 | orchestrator | Friday 13 February 2026 04:37:12 +0000 (0:00:00.081) 0:00:59.409 ******* 2026-02-13 04:38:44.517847 | orchestrator | 2026-02-13 04:38:44.517858 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-13 04:38:44.517871 | orchestrator | Friday 13 February 2026 04:37:12 +0000 (0:00:00.070) 0:00:59.480 ******* 2026-02-13 04:38:44.517883 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:38:44.517924 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:38:44.517938 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:38:44.517951 | orchestrator | 2026-02-13 04:38:44.517964 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-13 04:38:44.517977 | orchestrator | Friday 13 February 2026 04:37:14 +0000 (0:00:02.073) 0:01:01.554 ******* 2026-02-13 04:38:44.517990 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:38:44.518002 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:38:44.518015 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-13 04:38:44.518077 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-13 04:38:44.518090 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-13 04:38:44.518112 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-02-13 04:38:44.518125 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:38:44.518139 | orchestrator | 2026-02-13 04:38:44.518151 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-02-13 04:38:44.518164 | orchestrator | Friday 13 February 2026 04:38:05 +0000 (0:00:50.354) 0:01:51.908 ******* 2026-02-13 04:38:44.518176 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:38:44.518188 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:38:44.518201 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:38:44.518213 | orchestrator | 2026-02-13 04:38:44.518224 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-02-13 04:38:44.518235 | orchestrator | Friday 13 February 2026 04:38:39 +0000 (0:00:34.267) 0:02:26.176 ******* 2026-02-13 04:38:44.518245 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:38:44.518256 | orchestrator | 2026-02-13 04:38:44.518267 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-02-13 04:38:44.518278 | orchestrator | Friday 13 February 2026 04:38:41 +0000 (0:00:02.072) 0:02:28.248 ******* 2026-02-13 04:38:44.518289 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:38:44.518300 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:38:44.518310 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:38:44.518321 | orchestrator | 2026-02-13 04:38:44.518332 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-02-13 04:38:44.518342 | orchestrator | Friday 13 February 2026 04:38:41 +0000 (0:00:00.310) 0:02:28.559 ******* 2026-02-13 04:38:44.518354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2026-02-13 04:38:44.518376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2026-02-13 04:38:45.191510 | orchestrator | 2026-02-13 04:38:45.191598 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2026-02-13 04:38:45.191609 | orchestrator | Friday 13 February 2026 04:38:44 +0000 (0:00:02.540) 0:02:31.099 ******* 2026-02-13 04:38:45.191616 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:38:45.191624 | orchestrator | 2026-02-13 04:38:45.191631 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:38:45.191639 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 04:38:45.191648 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 04:38:45.191654 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-13 04:38:45.191661 | orchestrator | 2026-02-13 04:38:45.191667 | orchestrator | 2026-02-13 04:38:45.191674 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:38:45.191681 | orchestrator | Friday 13 February 2026 04:38:44 +0000 (0:00:00.303) 0:02:31.403 ******* 2026-02-13 04:38:45.191687 | orchestrator | =============================================================================== 2026-02-13 04:38:45.191694 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.35s 2026-02-13 04:38:45.191715 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 39.03s 2026-02-13 04:38:45.191739 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.27s 2026-02-13 04:38:45.191746 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.54s 2026-02-13 04:38:45.191753 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.26s 2026-02-13 04:38:45.191760 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.17s 2026-02-13 04:38:45.191766 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.07s 2026-02-13 04:38:45.191773 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.07s 2026-02-13 04:38:45.191779 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.73s 2026-02-13 04:38:45.191786 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.61s 2026-02-13 04:38:45.191792 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.34s 2026-02-13 04:38:45.191799 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.32s 2026-02-13 04:38:45.191806 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.30s 2026-02-13 04:38:45.191812 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.02s 2026-02-13 04:38:45.191819 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.92s 2026-02-13 04:38:45.191825 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.90s 2026-02-13 04:38:45.191832 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.74s 2026-02-13 04:38:45.191838 | orchestrator | grafana : Find templated grafana dashboards 
----------------------------- 0.74s 2026-02-13 04:38:45.191845 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.64s 2026-02-13 04:38:45.191851 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.59s 2026-02-13 04:38:45.523533 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh 2026-02-13 04:38:45.529736 | orchestrator | + set -e 2026-02-13 04:38:45.529827 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 04:38:45.530392 | orchestrator | ++ export INTERACTIVE=false 2026-02-13 04:38:45.530441 | orchestrator | ++ INTERACTIVE=false 2026-02-13 04:38:45.530458 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-13 04:38:45.530473 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-13 04:38:45.530488 | orchestrator | + source /opt/manager-vars.sh 2026-02-13 04:38:45.531680 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-13 04:38:45.531771 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-13 04:38:45.531794 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-13 04:38:45.531813 | orchestrator | ++ CEPH_VERSION=reef 2026-02-13 04:38:45.531833 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-13 04:38:45.531854 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-13 04:38:45.531871 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-13 04:38:45.531920 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-13 04:38:45.531940 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-13 04:38:45.531959 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-13 04:38:45.531978 | orchestrator | ++ export ARA=false 2026-02-13 04:38:45.531998 | orchestrator | ++ ARA=false 2026-02-13 04:38:45.532018 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-13 04:38:45.532030 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-13 04:38:45.532041 | orchestrator | ++ export TEMPEST=false 2026-02-13 04:38:45.532052 | orchestrator | ++ 
TEMPEST=false 2026-02-13 04:38:45.532062 | orchestrator | ++ export IS_ZUUL=true 2026-02-13 04:38:45.532073 | orchestrator | ++ IS_ZUUL=true 2026-02-13 04:38:45.532083 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 04:38:45.532095 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 04:38:45.532106 | orchestrator | ++ export EXTERNAL_API=false 2026-02-13 04:38:45.532117 | orchestrator | ++ EXTERNAL_API=false 2026-02-13 04:38:45.532127 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-13 04:38:45.532137 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-13 04:38:45.532148 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-13 04:38:45.532159 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-13 04:38:45.532170 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-13 04:38:45.532180 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-13 04:38:45.532647 | orchestrator | ++ semver 9.5.0 8.0.0 2026-02-13 04:38:45.600427 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-13 04:38:45.600536 | orchestrator | + osism apply clusterapi 2026-02-13 04:38:47.893459 | orchestrator | 2026-02-13 04:38:47 | INFO  | Task fba1abfc-3dd6-4d8a-83e5-ea0970070fcf (clusterapi) was prepared for execution. 2026-02-13 04:38:47.893562 | orchestrator | 2026-02-13 04:38:47 | INFO  | It takes a moment until task fba1abfc-3dd6-4d8a-83e5-ea0970070fcf (clusterapi) has been started and output is visible here. 
2026-02-13 04:39:41.103444 | orchestrator | 2026-02-13 04:39:41.103545 | orchestrator | PLAY [Apply cert_manager role] ************************************************* 2026-02-13 04:39:41.103557 | orchestrator | 2026-02-13 04:39:41.103563 | orchestrator | TASK [Include cert_manager role] *********************************************** 2026-02-13 04:39:41.103571 | orchestrator | Friday 13 February 2026 04:38:52 +0000 (0:00:00.207) 0:00:00.207 ******* 2026-02-13 04:39:41.103579 | orchestrator | included: cert_manager for testbed-manager 2026-02-13 04:39:41.103585 | orchestrator | 2026-02-13 04:39:41.103592 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] ********************************* 2026-02-13 04:39:41.103599 | orchestrator | Friday 13 February 2026 04:38:52 +0000 (0:00:00.250) 0:00:00.457 ******* 2026-02-13 04:39:41.103605 | orchestrator | changed: [testbed-manager] 2026-02-13 04:39:41.103613 | orchestrator | 2026-02-13 04:39:41.103620 | orchestrator | TASK [cert_manager : Deploy cert-manager] ************************************** 2026-02-13 04:39:41.103626 | orchestrator | Friday 13 February 2026 04:38:57 +0000 (0:00:05.364) 0:00:05.822 ******* 2026-02-13 04:39:41.103633 | orchestrator | changed: [testbed-manager] 2026-02-13 04:39:41.103639 | orchestrator | 2026-02-13 04:39:41.103645 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] *********************** 2026-02-13 04:39:41.103651 | orchestrator | 2026-02-13 04:39:41.103657 | orchestrator | TASK [Get capi-system namespace phase] ***************************************** 2026-02-13 04:39:41.103663 | orchestrator | Friday 13 February 2026 04:39:21 +0000 (0:00:23.649) 0:00:29.471 ******* 2026-02-13 04:39:41.103669 | orchestrator | ok: [testbed-manager] 2026-02-13 04:39:41.103676 | orchestrator | 2026-02-13 04:39:41.103682 | orchestrator | TASK [Set capi-system-phase fact] ********************************************** 2026-02-13 04:39:41.103689 | orchestrator | Friday 13 
February 2026 04:39:22 +0000 (0:00:01.113) 0:00:30.585 ******* 2026-02-13 04:39:41.103694 | orchestrator | ok: [testbed-manager] 2026-02-13 04:39:41.103700 | orchestrator | 2026-02-13 04:39:41.103721 | orchestrator | TASK [Initialize the CAPI management cluster] ********************************** 2026-02-13 04:39:41.103729 | orchestrator | Friday 13 February 2026 04:39:22 +0000 (0:00:00.157) 0:00:30.742 ******* 2026-02-13 04:39:41.103736 | orchestrator | ok: [testbed-manager] 2026-02-13 04:39:41.103742 | orchestrator | 2026-02-13 04:39:41.103749 | orchestrator | TASK [Upgrade the CAPI management cluster] ************************************* 2026-02-13 04:39:41.103755 | orchestrator | Friday 13 February 2026 04:39:38 +0000 (0:00:15.721) 0:00:46.463 ******* 2026-02-13 04:39:41.103761 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:39:41.103767 | orchestrator | 2026-02-13 04:39:41.103773 | orchestrator | TASK [Install openstack-resource-controller] *********************************** 2026-02-13 04:39:41.103780 | orchestrator | Friday 13 February 2026 04:39:38 +0000 (0:00:00.143) 0:00:46.607 ******* 2026-02-13 04:39:41.103786 | orchestrator | changed: [testbed-manager] 2026-02-13 04:39:41.103792 | orchestrator | 2026-02-13 04:39:41.103798 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:39:41.103804 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 04:39:41.103812 | orchestrator | 2026-02-13 04:39:41.103819 | orchestrator | 2026-02-13 04:39:41.103825 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:39:41.103831 | orchestrator | Friday 13 February 2026 04:39:40 +0000 (0:00:02.083) 0:00:48.690 ******* 2026-02-13 04:39:41.103837 | orchestrator | =============================================================================== 2026-02-13 04:39:41.103844 | orchestrator | 
cert_manager : Deploy cert-manager ------------------------------------- 23.65s 2026-02-13 04:39:41.103870 | orchestrator | Initialize the CAPI management cluster --------------------------------- 15.72s 2026-02-13 04:39:41.103877 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.36s 2026-02-13 04:39:41.103883 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.08s 2026-02-13 04:39:41.103889 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.11s 2026-02-13 04:39:41.103895 | orchestrator | Include cert_manager role ----------------------------------------------- 0.25s 2026-02-13 04:39:41.103933 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.16s 2026-02-13 04:39:41.103940 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.14s 2026-02-13 04:39:41.415089 | orchestrator | + osism apply magnum 2026-02-13 04:39:43.465526 | orchestrator | 2026-02-13 04:39:43 | INFO  | Task 8c196d97-b02b-4fbf-b713-db6e72687399 (magnum) was prepared for execution. 2026-02-13 04:39:43.465609 | orchestrator | 2026-02-13 04:39:43 | INFO  | It takes a moment until task 8c196d97-b02b-4fbf-b713-db6e72687399 (magnum) has been started and output is visible here. 
2026-02-13 04:40:25.742528 | orchestrator | 2026-02-13 04:40:25.742614 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:40:25.742623 | orchestrator | 2026-02-13 04:40:25.742630 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:40:25.742636 | orchestrator | Friday 13 February 2026 04:39:47 +0000 (0:00:00.285) 0:00:00.285 ******* 2026-02-13 04:40:25.742641 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:40:25.742648 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:40:25.742653 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:40:25.742658 | orchestrator | 2026-02-13 04:40:25.742663 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:40:25.742668 | orchestrator | Friday 13 February 2026 04:39:48 +0000 (0:00:00.331) 0:00:00.616 ******* 2026-02-13 04:40:25.742673 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-02-13 04:40:25.742678 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-02-13 04:40:25.742683 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-02-13 04:40:25.742688 | orchestrator | 2026-02-13 04:40:25.742693 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-02-13 04:40:25.742698 | orchestrator | 2026-02-13 04:40:25.742703 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-13 04:40:25.742708 | orchestrator | Friday 13 February 2026 04:39:48 +0000 (0:00:00.482) 0:00:01.099 ******* 2026-02-13 04:40:25.742712 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:40:25.742718 | orchestrator | 2026-02-13 04:40:25.742723 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-02-13 
04:40:25.742728 | orchestrator | Friday 13 February 2026 04:39:49 +0000 (0:00:00.587) 0:00:01.686 ******* 2026-02-13 04:40:25.742733 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-02-13 04:40:25.742738 | orchestrator | 2026-02-13 04:40:25.742743 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-02-13 04:40:25.742748 | orchestrator | Friday 13 February 2026 04:39:52 +0000 (0:00:03.651) 0:00:05.338 ******* 2026-02-13 04:40:25.742753 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-02-13 04:40:25.742758 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-02-13 04:40:25.742763 | orchestrator | 2026-02-13 04:40:25.742768 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-02-13 04:40:25.742773 | orchestrator | Friday 13 February 2026 04:39:59 +0000 (0:00:06.402) 0:00:11.740 ******* 2026-02-13 04:40:25.742778 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-13 04:40:25.742783 | orchestrator | 2026-02-13 04:40:25.742788 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-02-13 04:40:25.742810 | orchestrator | Friday 13 February 2026 04:40:02 +0000 (0:00:03.298) 0:00:15.038 ******* 2026-02-13 04:40:25.742826 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-13 04:40:25.742831 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-02-13 04:40:25.742836 | orchestrator | 2026-02-13 04:40:25.742841 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-02-13 04:40:25.742846 | orchestrator | Friday 13 February 2026 04:40:06 +0000 (0:00:03.788) 0:00:18.827 ******* 2026-02-13 04:40:25.742850 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-13 04:40:25.742855 | orchestrator | 2026-02-13 04:40:25.742860 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-02-13 04:40:25.742865 | orchestrator | Friday 13 February 2026 04:40:09 +0000 (0:00:03.277) 0:00:22.104 ******* 2026-02-13 04:40:25.742870 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-02-13 04:40:25.742874 | orchestrator | 2026-02-13 04:40:25.742879 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-02-13 04:40:25.742884 | orchestrator | Friday 13 February 2026 04:40:13 +0000 (0:00:03.785) 0:00:25.890 ******* 2026-02-13 04:40:25.742889 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:40:25.742893 | orchestrator | 2026-02-13 04:40:25.742898 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-02-13 04:40:25.742903 | orchestrator | Friday 13 February 2026 04:40:16 +0000 (0:00:03.301) 0:00:29.191 ******* 2026-02-13 04:40:25.742908 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:40:25.742912 | orchestrator | 2026-02-13 04:40:25.742917 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-02-13 04:40:25.742922 | orchestrator | Friday 13 February 2026 04:40:20 +0000 (0:00:03.889) 0:00:33.080 ******* 2026-02-13 04:40:25.742927 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:40:25.742932 | orchestrator | 2026-02-13 04:40:25.742936 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-02-13 04:40:25.742941 | orchestrator | Friday 13 February 2026 04:40:24 +0000 (0:00:03.496) 0:00:36.576 ******* 2026-02-13 04:40:25.742958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:25.742967 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:25.742977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:25.742986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:25.742992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:25.743000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:33.272527 | orchestrator | 2026-02-13 04:40:33.272635 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-13 04:40:33.272653 | orchestrator | Friday 13 February 2026 04:40:25 +0000 (0:00:01.616) 0:00:38.193 ******* 2026-02-13 04:40:33.272665 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:40:33.272678 | orchestrator | 2026-02-13 04:40:33.272690 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-13 04:40:33.272701 | orchestrator | Friday 13 February 2026 04:40:25 +0000 (0:00:00.142) 0:00:38.335 ******* 2026-02-13 04:40:33.272712 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:40:33.272723 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:40:33.272734 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:40:33.272768 | orchestrator | 2026-02-13 04:40:33.272780 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-13 04:40:33.272790 | orchestrator | Friday 13 February 2026 04:40:26 +0000 (0:00:00.290) 0:00:38.625 ******* 2026-02-13 04:40:33.272801 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 04:40:33.272812 | orchestrator | 2026-02-13 04:40:33.272822 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-13 04:40:33.272833 | orchestrator | Friday 13 February 2026 04:40:27 +0000 (0:00:00.883) 0:00:39.509 ******* 2026-02-13 04:40:33.272846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:33.272876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:33.272889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:33.272919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:33.272940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:33.272951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:33.272963 | orchestrator | 2026-02-13 04:40:33.272974 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-13 04:40:33.272990 
| orchestrator | Friday 13 February 2026 04:40:29 +0000 (0:00:02.504) 0:00:42.013 ******* 2026-02-13 04:40:33.273001 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:40:33.273014 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:40:33.273024 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:40:33.273062 | orchestrator | 2026-02-13 04:40:33.273075 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-13 04:40:33.273088 | orchestrator | Friday 13 February 2026 04:40:30 +0000 (0:00:00.492) 0:00:42.505 ******* 2026-02-13 04:40:33.273101 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:40:33.273114 | orchestrator | 2026-02-13 04:40:33.273126 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-13 04:40:33.273139 | orchestrator | Friday 13 February 2026 04:40:30 +0000 (0:00:00.579) 0:00:43.084 ******* 2026-02-13 04:40:33.273152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:33.273175 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:34.126563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:34.126666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:34.126700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:34.126712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:34.126724 | orchestrator | 2026-02-13 04:40:34.126737 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-13 04:40:34.126750 | orchestrator | Friday 13 February 2026 04:40:33 +0000 (0:00:02.645) 0:00:45.730 ******* 2026-02-13 04:40:34.126779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-13 04:40:34.126813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:40:34.126826 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:40:34.126844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-13 04:40:34.126856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:40:34.126868 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:40:34.126879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-13 04:40:34.126905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:40:37.671615 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:40:37.671740 | orchestrator | 2026-02-13 
04:40:37.671757 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-13 04:40:37.671771 | orchestrator | Friday 13 February 2026 04:40:34 +0000 (0:00:00.852) 0:00:46.583 ******* 2026-02-13 04:40:37.671785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-13 04:40:37.671818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:40:37.671831 | 
orchestrator | skipping: [testbed-node-0] 2026-02-13 04:40:37.671843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-13 04:40:37.671875 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:40:37.671887 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:40:37.671917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-13 04:40:37.671930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:40:37.671968 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:40:37.671980 | orchestrator | 2026-02-13 04:40:37.671991 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-13 04:40:37.672002 | orchestrator | Friday 13 February 2026 04:40:34 +0000 (0:00:00.862) 0:00:47.445 ******* 2026-02-13 04:40:37.672020 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:37.672058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:37.672088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:43.798843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:43.798980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:43.798999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:43.799033 | orchestrator | 2026-02-13 04:40:43.799118 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-13 04:40:43.799132 | orchestrator | Friday 13 February 2026 04:40:37 +0000 (0:00:02.684) 0:00:50.129 ******* 2026-02-13 04:40:43.799144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:43.799176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:43.799189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:43.799207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:43.799219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:43.799238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:43.799250 | orchestrator | 2026-02-13 04:40:43.799261 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-13 04:40:43.799272 | orchestrator | Friday 13 February 2026 04:40:43 +0000 (0:00:05.470) 0:00:55.599 ******* 2026-02-13 04:40:43.799293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-13 04:40:45.804914 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:40:45.805019 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:40:45.805134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-13 04:40:45.805176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:40:45.805188 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:40:45.805200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-13 04:40:45.805232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 04:40:45.805244 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:40:45.805255 | orchestrator | 2026-02-13 04:40:45.805268 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-13 04:40:45.805280 | orchestrator | Friday 13 February 2026 04:40:43 +0000 (0:00:00.658) 0:00:56.258 ******* 2026-02-13 04:40:45.805297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:45.805317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:45.805329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-13 04:40:45.805341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:40:45.805361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-13 04:41:44.242743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-13 04:41:44.242888 | orchestrator | 2026-02-13 04:41:44.242906 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-13 04:41:44.242920 | orchestrator | Friday 13 February 2026 04:40:45 +0000 (0:00:02.004) 0:00:58.262 ******* 2026-02-13 04:41:44.242931 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:41:44.242943 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:41:44.242954 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:41:44.242965 | orchestrator | 2026-02-13 04:41:44.242976 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-13 04:41:44.242987 | orchestrator | Friday 13 February 2026 04:40:46 +0000 (0:00:00.507) 0:00:58.770 ******* 2026-02-13 04:41:44.242997 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:41:44.243008 | orchestrator | 2026-02-13 04:41:44.243019 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-13 04:41:44.243030 | orchestrator | Friday 13 February 2026 04:40:48 +0000 (0:00:02.192) 0:01:00.963 ******* 2026-02-13 04:41:44.243040 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:41:44.243051 | orchestrator | 2026-02-13 04:41:44.243062 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-13 04:41:44.243073 | orchestrator | Friday 13 February 2026 04:40:50 +0000 (0:00:02.293) 0:01:03.256 ******* 2026-02-13 04:41:44.243083 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:41:44.243094 | orchestrator | 2026-02-13 04:41:44.243179 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-13 04:41:44.243193 | orchestrator | Friday 13 February 2026 04:41:07 +0000 (0:00:16.389) 0:01:19.646 ******* 2026-02-13 04:41:44.243204 | orchestrator | 2026-02-13 04:41:44.243215 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-13 04:41:44.243225 | orchestrator | Friday 13 February 2026 04:41:07 +0000 (0:00:00.077) 0:01:19.723 ******* 2026-02-13 04:41:44.243236 | orchestrator | 2026-02-13 04:41:44.243247 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-13 04:41:44.243260 | orchestrator | Friday 13 February 2026 04:41:07 +0000 (0:00:00.078) 0:01:19.801 ******* 2026-02-13 04:41:44.243271 | orchestrator | 2026-02-13 04:41:44.243284 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-13 04:41:44.243297 | orchestrator | Friday 13 February 2026 04:41:07 +0000 (0:00:00.073) 0:01:19.875 ******* 2026-02-13 04:41:44.243309 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:41:44.243322 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:41:44.243334 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:41:44.243346 | orchestrator | 2026-02-13 04:41:44.243358 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-13 04:41:44.243371 | orchestrator | Friday 13 February 2026 04:41:28 +0000 (0:00:20.643) 0:01:40.518 ******* 2026-02-13 04:41:44.243383 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:41:44.243395 | orchestrator | changed: [testbed-node-2] 2026-02-13 04:41:44.243408 | orchestrator | changed: [testbed-node-1] 2026-02-13 04:41:44.243420 | orchestrator | 2026-02-13 04:41:44.243432 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:41:44.243446 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 04:41:44.243460 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-13 04:41:44.243474 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-13 04:41:44.243487 | orchestrator | 2026-02-13 04:41:44.243498 | orchestrator | 2026-02-13 04:41:44.243509 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:41:44.243519 | orchestrator | Friday 13 February 2026 04:41:43 +0000 (0:00:15.816) 0:01:56.334 ******* 2026-02-13 04:41:44.243539 | orchestrator | =============================================================================== 2026-02-13 04:41:44.243549 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.64s 2026-02-13 04:41:44.243560 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.39s 2026-02-13 04:41:44.243571 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.82s 2026-02-13 04:41:44.243582 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.40s 2026-02-13 04:41:44.243592 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.47s 2026-02-13 04:41:44.243603 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.89s 2026-02-13 04:41:44.243614 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.79s 2026-02-13 04:41:44.243643 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.79s 2026-02-13 04:41:44.243654 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.65s 2026-02-13 04:41:44.243665 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.50s 2026-02-13 04:41:44.243676 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.30s 2026-02-13 04:41:44.243686 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.30s 2026-02-13 04:41:44.243697 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.28s 2026-02-13 04:41:44.243717 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.68s 2026-02-13 04:41:44.243735 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.65s 2026-02-13 04:41:44.243752 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.50s 2026-02-13 04:41:44.243780 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.29s 2026-02-13 04:41:44.243802 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.19s 2026-02-13 04:41:44.243821 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.00s 2026-02-13 04:41:44.243840 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.62s 2026-02-13 04:41:44.866963 | orchestrator | ok: Runtime: 1:41:12.203034 2026-02-13 04:41:45.119744 | 2026-02-13 04:41:45.119921 | TASK [Deploy in a nutshell] 2026-02-13 04:41:45.655311 | orchestrator | skipping: Conditional result was False 2026-02-13 04:41:45.679006 | 2026-02-13 04:41:45.679154 | TASK [Bootstrap services] 2026-02-13 04:41:46.393647 | orchestrator | 2026-02-13 04:41:46.393771 | orchestrator | # BOOTSTRAP 2026-02-13 04:41:46.393781 | orchestrator | 2026-02-13 04:41:46.393786 | orchestrator | + set -e 2026-02-13 04:41:46.393791 | orchestrator | + echo 2026-02-13 04:41:46.393797 | orchestrator | + echo '# BOOTSTRAP' 2026-02-13 04:41:46.393804 | orchestrator | + echo 2026-02-13 04:41:46.393826 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-13 04:41:46.402233 | orchestrator | + set -e 2026-02-13 04:41:46.402278 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-13 04:41:48.738637 | orchestrator | 2026-02-13 04:41:48 | INFO  | It takes a 
moment until task e795d82b-6739-4613-b218-d1cb8dbba118 (flavor-manager) has been started and output is visible here. 2026-02-13 04:41:56.283041 | orchestrator | 2026-02-13 04:41:51 | INFO  | Flavor SCS-1L-1 created 2026-02-13 04:41:56.283243 | orchestrator | 2026-02-13 04:41:52 | INFO  | Flavor SCS-1L-1-5 created 2026-02-13 04:41:56.283263 | orchestrator | 2026-02-13 04:41:52 | INFO  | Flavor SCS-1V-2 created 2026-02-13 04:41:56.283276 | orchestrator | 2026-02-13 04:41:52 | INFO  | Flavor SCS-1V-2-5 created 2026-02-13 04:41:56.283288 | orchestrator | 2026-02-13 04:41:52 | INFO  | Flavor SCS-1V-4 created 2026-02-13 04:41:56.283299 | orchestrator | 2026-02-13 04:41:52 | INFO  | Flavor SCS-1V-4-10 created 2026-02-13 04:41:56.283311 | orchestrator | 2026-02-13 04:41:52 | INFO  | Flavor SCS-1V-8 created 2026-02-13 04:41:56.283323 | orchestrator | 2026-02-13 04:41:52 | INFO  | Flavor SCS-1V-8-20 created 2026-02-13 04:41:56.283349 | orchestrator | 2026-02-13 04:41:53 | INFO  | Flavor SCS-2V-4 created 2026-02-13 04:41:56.283361 | orchestrator | 2026-02-13 04:41:53 | INFO  | Flavor SCS-2V-4-10 created 2026-02-13 04:41:56.283372 | orchestrator | 2026-02-13 04:41:53 | INFO  | Flavor SCS-2V-8 created 2026-02-13 04:41:56.283384 | orchestrator | 2026-02-13 04:41:53 | INFO  | Flavor SCS-2V-8-20 created 2026-02-13 04:41:56.283395 | orchestrator | 2026-02-13 04:41:53 | INFO  | Flavor SCS-2V-16 created 2026-02-13 04:41:56.283406 | orchestrator | 2026-02-13 04:41:53 | INFO  | Flavor SCS-2V-16-50 created 2026-02-13 04:41:56.283417 | orchestrator | 2026-02-13 04:41:54 | INFO  | Flavor SCS-4V-8 created 2026-02-13 04:41:56.283429 | orchestrator | 2026-02-13 04:41:54 | INFO  | Flavor SCS-4V-8-20 created 2026-02-13 04:41:56.283440 | orchestrator | 2026-02-13 04:41:54 | INFO  | Flavor SCS-4V-16 created 2026-02-13 04:41:56.283451 | orchestrator | 2026-02-13 04:41:54 | INFO  | Flavor SCS-4V-16-50 created 2026-02-13 04:41:56.283462 | orchestrator | 2026-02-13 04:41:54 | INFO  | Flavor 
SCS-4V-32 created 2026-02-13 04:41:56.283473 | orchestrator | 2026-02-13 04:41:54 | INFO  | Flavor SCS-4V-32-100 created 2026-02-13 04:41:56.283485 | orchestrator | 2026-02-13 04:41:54 | INFO  | Flavor SCS-8V-16 created 2026-02-13 04:41:56.283496 | orchestrator | 2026-02-13 04:41:55 | INFO  | Flavor SCS-8V-16-50 created 2026-02-13 04:41:56.283508 | orchestrator | 2026-02-13 04:41:55 | INFO  | Flavor SCS-8V-32 created 2026-02-13 04:41:56.283519 | orchestrator | 2026-02-13 04:41:55 | INFO  | Flavor SCS-8V-32-100 created 2026-02-13 04:41:56.283530 | orchestrator | 2026-02-13 04:41:55 | INFO  | Flavor SCS-16V-32 created 2026-02-13 04:41:56.283542 | orchestrator | 2026-02-13 04:41:55 | INFO  | Flavor SCS-16V-32-100 created 2026-02-13 04:41:56.283553 | orchestrator | 2026-02-13 04:41:55 | INFO  | Flavor SCS-2V-4-20s created 2026-02-13 04:41:56.283564 | orchestrator | 2026-02-13 04:41:55 | INFO  | Flavor SCS-4V-8-50s created 2026-02-13 04:41:56.283576 | orchestrator | 2026-02-13 04:41:56 | INFO  | Flavor SCS-8V-32-100s created 2026-02-13 04:41:58.593598 | orchestrator | 2026-02-13 04:41:58 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-13 04:42:08.762212 | orchestrator | 2026-02-13 04:42:08 | INFO  | Task 250bb0e1-aec8-4aa5-a77f-4558e3346cbf (bootstrap-basic) was prepared for execution. 2026-02-13 04:42:08.762356 | orchestrator | 2026-02-13 04:42:08 | INFO  | It takes a moment until task 250bb0e1-aec8-4aa5-a77f-4558e3346cbf (bootstrap-basic) has been started and output is visible here. 
2026-02-13 04:42:53.066894 | orchestrator | 2026-02-13 04:42:53.067031 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-13 04:42:53.067057 | orchestrator | 2026-02-13 04:42:53.067072 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-13 04:42:53.067088 | orchestrator | Friday 13 February 2026 04:42:13 +0000 (0:00:00.087) 0:00:00.087 ******* 2026-02-13 04:42:53.067103 | orchestrator | ok: [localhost] 2026-02-13 04:42:53.067120 | orchestrator | 2026-02-13 04:42:53.067133 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-13 04:42:53.067147 | orchestrator | Friday 13 February 2026 04:42:15 +0000 (0:00:01.954) 0:00:02.042 ******* 2026-02-13 04:42:53.067242 | orchestrator | ok: [localhost] 2026-02-13 04:42:53.067261 | orchestrator | 2026-02-13 04:42:53.067276 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-13 04:42:53.067292 | orchestrator | Friday 13 February 2026 04:42:22 +0000 (0:00:06.869) 0:00:08.911 ******* 2026-02-13 04:42:53.067307 | orchestrator | changed: [localhost] 2026-02-13 04:42:53.067322 | orchestrator | 2026-02-13 04:42:53.067337 | orchestrator | TASK [Create public network] *************************************************** 2026-02-13 04:42:53.067353 | orchestrator | Friday 13 February 2026 04:42:28 +0000 (0:00:06.400) 0:00:15.312 ******* 2026-02-13 04:42:53.067367 | orchestrator | changed: [localhost] 2026-02-13 04:42:53.067381 | orchestrator | 2026-02-13 04:42:53.067396 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-13 04:42:53.067411 | orchestrator | Friday 13 February 2026 04:42:33 +0000 (0:00:05.204) 0:00:20.517 ******* 2026-02-13 04:42:53.067433 | orchestrator | changed: [localhost] 2026-02-13 04:42:53.067449 | orchestrator | 2026-02-13 04:42:53.067464 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-13 04:42:53.067479 | orchestrator | Friday 13 February 2026 04:42:40 +0000 (0:00:06.278) 0:00:26.795 ******* 2026-02-13 04:42:53.067495 | orchestrator | changed: [localhost] 2026-02-13 04:42:53.067510 | orchestrator | 2026-02-13 04:42:53.067525 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-13 04:42:53.067539 | orchestrator | Friday 13 February 2026 04:42:45 +0000 (0:00:04.980) 0:00:31.776 ******* 2026-02-13 04:42:53.067555 | orchestrator | changed: [localhost] 2026-02-13 04:42:53.067570 | orchestrator | 2026-02-13 04:42:53.067585 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-13 04:42:53.067616 | orchestrator | Friday 13 February 2026 04:42:49 +0000 (0:00:04.040) 0:00:35.817 ******* 2026-02-13 04:42:53.067632 | orchestrator | ok: [localhost] 2026-02-13 04:42:53.067645 | orchestrator | 2026-02-13 04:42:53.067659 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:42:53.067673 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-13 04:42:53.067688 | orchestrator | 2026-02-13 04:42:53.067703 | orchestrator | 2026-02-13 04:42:53.067718 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:42:53.067731 | orchestrator | Friday 13 February 2026 04:42:52 +0000 (0:00:03.698) 0:00:39.515 ******* 2026-02-13 04:42:53.067746 | orchestrator | =============================================================================== 2026-02-13 04:42:53.067760 | orchestrator | Get volume type LUKS ---------------------------------------------------- 6.87s 2026-02-13 04:42:53.067775 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.40s 2026-02-13 04:42:53.067790 | 
orchestrator | Set public network to default ------------------------------------------- 6.28s 2026-02-13 04:42:53.067805 | orchestrator | Create public network --------------------------------------------------- 5.21s 2026-02-13 04:42:53.067852 | orchestrator | Create public subnet ---------------------------------------------------- 4.98s 2026-02-13 04:42:53.067869 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.04s 2026-02-13 04:42:53.067884 | orchestrator | Create manager role ----------------------------------------------------- 3.70s 2026-02-13 04:42:53.067898 | orchestrator | Gathering Facts --------------------------------------------------------- 1.95s 2026-02-13 04:42:55.505111 | orchestrator | 2026-02-13 04:42:55 | INFO  | It takes a moment until task 4779b13e-499e-4681-bc30-bc25f9dd8519 (image-manager) has been started and output is visible here. 2026-02-13 04:43:39.495091 | orchestrator | 2026-02-13 04:42:58 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-13 04:43:39.495217 | orchestrator | 2026-02-13 04:42:58 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-13 04:43:39.495232 | orchestrator | 2026-02-13 04:42:58 | INFO  | Importing image Cirros 0.6.2 2026-02-13 04:43:39.495239 | orchestrator | 2026-02-13 04:42:58 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-13 04:43:39.495247 | orchestrator | 2026-02-13 04:43:00 | INFO  | Waiting for image to leave queued state... 2026-02-13 04:43:39.495255 | orchestrator | 2026-02-13 04:43:02 | INFO  | Waiting for import to complete... 
2026-02-13 04:43:39.495261 | orchestrator | 2026-02-13 04:43:13 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-02-13 04:43:39.495269 | orchestrator | 2026-02-13 04:43:13 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-02-13 04:43:39.495275 | orchestrator | 2026-02-13 04:43:13 | INFO  | Setting internal_version = 0.6.2
2026-02-13 04:43:39.495282 | orchestrator | 2026-02-13 04:43:13 | INFO  | Setting image_original_user = cirros
2026-02-13 04:43:39.495290 | orchestrator | 2026-02-13 04:43:13 | INFO  | Adding tag os:cirros
2026-02-13 04:43:39.495297 | orchestrator | 2026-02-13 04:43:13 | INFO  | Setting property architecture: x86_64
2026-02-13 04:43:39.495304 | orchestrator | 2026-02-13 04:43:14 | INFO  | Setting property hw_disk_bus: scsi
2026-02-13 04:43:39.495311 | orchestrator | 2026-02-13 04:43:14 | INFO  | Setting property hw_rng_model: virtio
2026-02-13 04:43:39.495318 | orchestrator | 2026-02-13 04:43:14 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-13 04:43:39.495325 | orchestrator | 2026-02-13 04:43:14 | INFO  | Setting property hw_watchdog_action: reset
2026-02-13 04:43:39.495331 | orchestrator | 2026-02-13 04:43:15 | INFO  | Setting property hypervisor_type: qemu
2026-02-13 04:43:39.495338 | orchestrator | 2026-02-13 04:43:15 | INFO  | Setting property os_distro: cirros
2026-02-13 04:43:39.495344 | orchestrator | 2026-02-13 04:43:15 | INFO  | Setting property os_purpose: minimal
2026-02-13 04:43:39.495351 | orchestrator | 2026-02-13 04:43:15 | INFO  | Setting property replace_frequency: never
2026-02-13 04:43:39.495357 | orchestrator | 2026-02-13 04:43:16 | INFO  | Setting property uuid_validity: none
2026-02-13 04:43:39.495364 | orchestrator | 2026-02-13 04:43:16 | INFO  | Setting property provided_until: none
2026-02-13 04:43:39.495370 | orchestrator | 2026-02-13 04:43:16 | INFO  | Setting property image_description: Cirros
2026-02-13 04:43:39.495378 | orchestrator | 2026-02-13 04:43:16 | INFO  | Setting property image_name: Cirros
2026-02-13 04:43:39.495382 | orchestrator | 2026-02-13 04:43:17 | INFO  | Setting property internal_version: 0.6.2
2026-02-13 04:43:39.495386 | orchestrator | 2026-02-13 04:43:17 | INFO  | Setting property image_original_user: cirros
2026-02-13 04:43:39.495410 | orchestrator | 2026-02-13 04:43:17 | INFO  | Setting property os_version: 0.6.2
2026-02-13 04:43:39.495425 | orchestrator | 2026-02-13 04:43:18 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-02-13 04:43:39.495433 | orchestrator | 2026-02-13 04:43:18 | INFO  | Setting property image_build_date: 2023-05-30
2026-02-13 04:43:39.495439 | orchestrator | 2026-02-13 04:43:18 | INFO  | Checking status of 'Cirros 0.6.2'
2026-02-13 04:43:39.495445 | orchestrator | 2026-02-13 04:43:18 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-02-13 04:43:39.495450 | orchestrator | 2026-02-13 04:43:18 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-02-13 04:43:39.495457 | orchestrator | 2026-02-13 04:43:19 | INFO  | Processing image 'Cirros 0.6.3'
2026-02-13 04:43:39.495466 | orchestrator | 2026-02-13 04:43:19 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-02-13 04:43:39.495472 | orchestrator | 2026-02-13 04:43:19 | INFO  | Importing image Cirros 0.6.3
2026-02-13 04:43:39.495478 | orchestrator | 2026-02-13 04:43:19 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-13 04:43:39.495484 | orchestrator | 2026-02-13 04:43:20 | INFO  | Waiting for image to leave queued state...
2026-02-13 04:43:39.495490 | orchestrator | 2026-02-13 04:43:22 | INFO  | Waiting for import to complete...
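The per-property "Setting property ..." entries above correspond to image metadata updates. As an illustrative sketch (not image-manager's actual code), the same property list could be expressed as a single `openstack image set` call; the example below only assembles and prints the command (a dry run), so it runs anywhere:

```shell
# Hedged sketch: turning the property list image-manager applies above
# into one `openstack image set` invocation. Dry run only: the command
# is printed, not executed. `--property` is a real OpenStackClient flag.
IMAGE="Cirros 0.6.2"
set -- \
  architecture=x86_64 \
  hw_disk_bus=scsi \
  hw_rng_model=virtio \
  hw_scsi_model=virtio-scsi \
  hw_watchdog_action=reset \
  hypervisor_type=qemu \
  os_distro=cirros
cmd="openstack image set"
for kv in "$@"; do
  cmd="$cmd --property $kv"
done
echo "$cmd '$IMAGE'"
```

Batching the properties into one call avoids the one-API-request-per-property pattern visible in the log timestamps.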
2026-02-13 04:43:39.495513 | orchestrator | 2026-02-13 04:43:33 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-02-13 04:43:39.495520 | orchestrator | 2026-02-13 04:43:33 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-02-13 04:43:39.495526 | orchestrator | 2026-02-13 04:43:33 | INFO  | Setting internal_version = 0.6.3
2026-02-13 04:43:39.495532 | orchestrator | 2026-02-13 04:43:33 | INFO  | Setting image_original_user = cirros
2026-02-13 04:43:39.495539 | orchestrator | 2026-02-13 04:43:33 | INFO  | Adding tag os:cirros
2026-02-13 04:43:39.495544 | orchestrator | 2026-02-13 04:43:33 | INFO  | Setting property architecture: x86_64
2026-02-13 04:43:39.495548 | orchestrator | 2026-02-13 04:43:34 | INFO  | Setting property hw_disk_bus: scsi
2026-02-13 04:43:39.495552 | orchestrator | 2026-02-13 04:43:34 | INFO  | Setting property hw_rng_model: virtio
2026-02-13 04:43:39.495555 | orchestrator | 2026-02-13 04:43:34 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-13 04:43:39.495559 | orchestrator | 2026-02-13 04:43:34 | INFO  | Setting property hw_watchdog_action: reset
2026-02-13 04:43:39.495563 | orchestrator | 2026-02-13 04:43:34 | INFO  | Setting property hypervisor_type: qemu
2026-02-13 04:43:39.495567 | orchestrator | 2026-02-13 04:43:35 | INFO  | Setting property os_distro: cirros
2026-02-13 04:43:39.495571 | orchestrator | 2026-02-13 04:43:35 | INFO  | Setting property os_purpose: minimal
2026-02-13 04:43:39.495575 | orchestrator | 2026-02-13 04:43:35 | INFO  | Setting property replace_frequency: never
2026-02-13 04:43:39.495578 | orchestrator | 2026-02-13 04:43:35 | INFO  | Setting property uuid_validity: none
2026-02-13 04:43:39.495582 | orchestrator | 2026-02-13 04:43:36 | INFO  | Setting property provided_until: none
2026-02-13 04:43:39.495587 | orchestrator | 2026-02-13 04:43:36 | INFO  | Setting property image_description: Cirros
2026-02-13 04:43:39.495591 | orchestrator | 2026-02-13 04:43:36 | INFO  | Setting property image_name: Cirros
2026-02-13 04:43:39.495596 | orchestrator | 2026-02-13 04:43:37 | INFO  | Setting property internal_version: 0.6.3
2026-02-13 04:43:39.495619 | orchestrator | 2026-02-13 04:43:37 | INFO  | Setting property image_original_user: cirros
2026-02-13 04:43:39.495624 | orchestrator | 2026-02-13 04:43:37 | INFO  | Setting property os_version: 0.6.3
2026-02-13 04:43:39.495628 | orchestrator | 2026-02-13 04:43:37 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-02-13 04:43:39.495633 | orchestrator | 2026-02-13 04:43:38 | INFO  | Setting property image_build_date: 2024-09-26
2026-02-13 04:43:39.495637 | orchestrator | 2026-02-13 04:43:38 | INFO  | Checking status of 'Cirros 0.6.3'
2026-02-13 04:43:39.495641 | orchestrator | 2026-02-13 04:43:38 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-02-13 04:43:39.495646 | orchestrator | 2026-02-13 04:43:38 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-02-13 04:43:39.815156 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-02-13 04:43:42.014626 | orchestrator | 2026-02-13 04:43:42 | INFO  | date: 2026-02-13
2026-02-13 04:43:42.014748 | orchestrator | 2026-02-13 04:43:42 | INFO  | image: octavia-amphora-haproxy-2024.2.20260213.qcow2
2026-02-13 04:43:42.014788 | orchestrator | 2026-02-13 04:43:42 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260213.qcow2
2026-02-13 04:43:42.014803 | orchestrator | 2026-02-13 04:43:42 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260213.qcow2.CHECKSUM
2026-02-13 04:43:42.111715 | orchestrator | 2026-02-13 04:43:42 | INFO  | checksum: 9fdfa3657e1c44e6efc4ff986d191c1016377896c5e1a1a69a9ed51571a127f4
2026-02-13 04:43:42.192178 | orchestrator | 2026-02-13 04:43:42 | INFO  | It takes a moment until task 2fcf48eb-6a8e-438e-8e27-777a5f9fad92 (image-manager) has been started and output is visible here.
2026-02-13 04:44:55.198885 | orchestrator | 2026-02-13 04:43:44 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-13'
2026-02-13 04:44:55.199022 | orchestrator | 2026-02-13 04:43:44 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260213.qcow2: 200
2026-02-13 04:44:55.199040 | orchestrator | 2026-02-13 04:43:44 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-13
2026-02-13 04:44:55.199053 | orchestrator | 2026-02-13 04:43:44 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260213.qcow2
2026-02-13 04:44:55.199065 | orchestrator | 2026-02-13 04:43:45 | INFO  | Waiting for image to leave queued state...
2026-02-13 04:44:55.199077 | orchestrator | 2026-02-13 04:43:48 | INFO  | Waiting for import to complete...
2026-02-13 04:44:55.199089 | orchestrator | 2026-02-13 04:43:58 | INFO  | Waiting for import to complete...
2026-02-13 04:44:55.199100 | orchestrator | 2026-02-13 04:44:08 | INFO  | Waiting for import to complete...
2026-02-13 04:44:55.199111 | orchestrator | 2026-02-13 04:44:18 | INFO  | Waiting for import to complete...
2026-02-13 04:44:55.199125 | orchestrator | 2026-02-13 04:44:28 | INFO  | Waiting for import to complete...
2026-02-13 04:44:55.199137 | orchestrator | 2026-02-13 04:44:38 | INFO  | Waiting for import to complete...
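The 301 amphora bootstrap script above logs a `checksum_url` and the sha256 it resolves to before handing the image to image-manager. A minimal sketch of that verification step is below; to stay offline-runnable, both the "published" and the computed checksum are derived from a local temp file rather than a downloaded .CHECKSUM file:

```shell
# Hedged sketch of the checksum comparison step: the real script would
# fetch the .CHECKSUM file and compare it against the downloaded image;
# here both sides are computed locally so the example is self-contained.
tmp=$(mktemp)
printf 'fake image payload\n' > "$tmp"
expected=$(sha256sum "$tmp" | awk '{ print $1 }')  # stands in for the published checksum
actual=$(sha256sum "$tmp" | awk '{ print $1 }')    # checksum of the downloaded file
if [ "$actual" = "$expected" ]; then
  echo "checksum OK: $actual"
else
  echo "checksum MISMATCH: expected $expected, got $actual" >&2
  exit 1
fi
rm -f "$tmp"
```

Failing hard on a mismatch before the Glance import avoids registering a corrupt amphora image.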
2026-02-13 04:44:55.199148 | orchestrator | 2026-02-13 04:44:49 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-13' successfully completed, reloading images
2026-02-13 04:44:55.199160 | orchestrator | 2026-02-13 04:44:49 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-13'
2026-02-13 04:44:55.199195 | orchestrator | 2026-02-13 04:44:49 | INFO  | Setting internal_version = 2026-02-13
2026-02-13 04:44:55.199206 | orchestrator | 2026-02-13 04:44:49 | INFO  | Setting image_original_user = ubuntu
2026-02-13 04:44:55.199289 | orchestrator | 2026-02-13 04:44:49 | INFO  | Adding tag amphora
2026-02-13 04:44:55.199303 | orchestrator | 2026-02-13 04:44:49 | INFO  | Adding tag os:ubuntu
2026-02-13 04:44:55.199314 | orchestrator | 2026-02-13 04:44:50 | INFO  | Setting property architecture: x86_64
2026-02-13 04:44:55.199326 | orchestrator | 2026-02-13 04:44:50 | INFO  | Setting property hw_disk_bus: scsi
2026-02-13 04:44:55.199337 | orchestrator | 2026-02-13 04:44:50 | INFO  | Setting property hw_rng_model: virtio
2026-02-13 04:44:55.199348 | orchestrator | 2026-02-13 04:44:50 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-02-13 04:44:55.199359 | orchestrator | 2026-02-13 04:44:51 | INFO  | Setting property hw_watchdog_action: reset
2026-02-13 04:44:55.199370 | orchestrator | 2026-02-13 04:44:51 | INFO  | Setting property hypervisor_type: qemu
2026-02-13 04:44:55.199381 | orchestrator | 2026-02-13 04:44:51 | INFO  | Setting property os_distro: ubuntu
2026-02-13 04:44:55.199392 | orchestrator | 2026-02-13 04:44:51 | INFO  | Setting property replace_frequency: quarterly
2026-02-13 04:44:55.199404 | orchestrator | 2026-02-13 04:44:52 | INFO  | Setting property uuid_validity: last-1
2026-02-13 04:44:55.199416 | orchestrator | 2026-02-13 04:44:52 | INFO  | Setting property provided_until: none
2026-02-13 04:44:55.199430 | orchestrator | 2026-02-13 04:44:52 | INFO  | Setting property os_purpose: network
2026-02-13 04:44:55.199458 | orchestrator | 2026-02-13 04:44:52 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2026-02-13 04:44:55.199472 | orchestrator | 2026-02-13 04:44:53 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2026-02-13 04:44:55.199485 | orchestrator | 2026-02-13 04:44:53 | INFO  | Setting property internal_version: 2026-02-13
2026-02-13 04:44:55.199498 | orchestrator | 2026-02-13 04:44:53 | INFO  | Setting property image_original_user: ubuntu
2026-02-13 04:44:55.199510 | orchestrator | 2026-02-13 04:44:53 | INFO  | Setting property os_version: 2026-02-13
2026-02-13 04:44:55.199523 | orchestrator | 2026-02-13 04:44:54 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260213.qcow2
2026-02-13 04:44:55.199536 | orchestrator | 2026-02-13 04:44:54 | INFO  | Setting property image_build_date: 2026-02-13
2026-02-13 04:44:55.199548 | orchestrator | 2026-02-13 04:44:54 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-13'
2026-02-13 04:44:55.199561 | orchestrator | 2026-02-13 04:44:54 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-13'
2026-02-13 04:44:55.199594 | orchestrator | 2026-02-13 04:44:55 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2026-02-13 04:44:55.199607 | orchestrator | 2026-02-13 04:44:55 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2026-02-13 04:44:55.199621 | orchestrator | 2026-02-13 04:44:55 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2026-02-13 04:44:55.199634 | orchestrator | 2026-02-13 04:44:55 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2026-02-13 04:44:55.624689 | orchestrator | ok: Runtime: 0:03:09.530014
2026-02-13 04:44:55.643623 |
2026-02-13 04:44:55.643802 | TASK [Run checks]
2026-02-13 04:44:56.424827 | orchestrator | + set -e
2026-02-13 04:44:56.425012 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-13 04:44:56.425029 | orchestrator | ++ export INTERACTIVE=false
2026-02-13 04:44:56.425044 | orchestrator | ++ INTERACTIVE=false
2026-02-13 04:44:56.425054 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-13 04:44:56.425064 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-13 04:44:56.425075 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-13 04:44:56.425485 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-13 04:44:56.429860 | orchestrator |
2026-02-13 04:44:56.429921 | orchestrator | # CHECK
2026-02-13 04:44:56.429928 | orchestrator |
2026-02-13 04:44:56.429934 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-13 04:44:56.429945 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-13 04:44:56.429951 | orchestrator | + echo
2026-02-13 04:44:56.429957 | orchestrator | + echo '# CHECK'
2026-02-13 04:44:56.429963 | orchestrator | + echo
2026-02-13 04:44:56.429972 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-13 04:44:56.430916 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-13 04:44:56.490435 | orchestrator |
2026-02-13 04:44:56.490547 | orchestrator | ## Containers @ testbed-manager
2026-02-13 04:44:56.490569 | orchestrator |
2026-02-13 04:44:56.490590 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-13 04:44:56.490610 | orchestrator | + echo
2026-02-13 04:44:56.490627 | orchestrator | + echo '## Containers @ testbed-manager'
2026-02-13 04:44:56.490645 | orchestrator | + echo
2026-02-13 04:44:56.490665 | orchestrator | + osism container testbed-manager ps
2026-02-13 04:44:58.522805 | orchestrator | 2026-02-13 04:44:58 | INFO  | Creating empty known_hosts file: /share/known_hosts
2026-02-13 04:44:58.922262 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-13 04:44:58.922424 | orchestrator | bc4e6a4b1e3c registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter
2026-02-13 04:44:58.922452 | orchestrator | 2505c0761e99 registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager
2026-02-13 04:44:58.922465 | orchestrator | 921bf2ebad91 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-02-13 04:44:58.922477 | orchestrator | 1a50ee90330d registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-13 04:44:58.922489 | orchestrator | 5db1ed39db30 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server
2026-02-13 04:44:58.922505 | orchestrator | 6695df075481 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 58 minutes ago Up 57 minutes cephclient
2026-02-13 04:44:58.922517 | orchestrator | 7414d1ccb278 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-13 04:44:58.922529 | orchestrator | 35aac9779600 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-13 04:44:58.922583 | orchestrator | 8ab0bf650c57 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-13 04:44:58.922596 | orchestrator | 98b6bc9437ae registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient
2026-02-13 04:44:58.922608 | orchestrator | 53b168145366 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin
2026-02-13 04:44:58.922619 | orchestrator | 5640262424eb registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer
2026-02-13 04:44:58.922631 | orchestrator | 1661af12c9d1 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit
2026-02-13 04:44:58.922643 | orchestrator | 0076251e6431 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid
2026-02-13 04:44:58.922676 | orchestrator | 865863ab9446 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1
2026-02-13 04:44:58.922697 | orchestrator | 93f5752ed43a registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible
2026-02-13 04:44:58.922709 | orchestrator | f12412468239 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible
2026-02-13 04:44:58.922720 | orchestrator | 7f1de5d90246 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes
2026-02-13 04:44:58.922732 | orchestrator | fbfc003ebde4 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible
2026-02-13 04:44:58.922743 | orchestrator | bd4d8434cb72 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1
2026-02-13 04:44:58.922754 | orchestrator | 5b867c6ba9a3 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1
2026-02-13 04:44:58.922766 | orchestrator | 3d75a723625b registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1
2026-02-13 04:44:58.922785 | orchestrator | 6a68ad294dce registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient
2026-02-13 04:44:58.922797 | orchestrator | d78c3dd6231d registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1
2026-02-13 04:44:58.922808 | orchestrator | 024d1965d9cd registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend
2026-02-13 04:44:58.922819 | orchestrator | b6af5572a30c registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2026-02-13 04:44:58.922830 | orchestrator | 855fde435357 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1
2026-02-13 04:44:58.922841 | orchestrator | f9f1dfb070f6 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1
2026-02-13 04:44:58.922853 | orchestrator | a2fd157669c6 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1
2026-02-13 04:44:58.922870 | orchestrator | 4e6652a04806 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2026-02-13 04:44:59.259715 | orchestrator |
2026-02-13 04:44:59.259831 | orchestrator | ## Images @ testbed-manager
2026-02-13 04:44:59.259849 | orchestrator |
2026-02-13 04:44:59.259861 | orchestrator | + echo
2026-02-13 04:44:59.259873 | orchestrator | + echo '## Images @ testbed-manager'
2026-02-13 04:44:59.259931 | orchestrator | + echo
2026-02-13 04:44:59.259947 | orchestrator | + osism container testbed-manager images
2026-02-13 04:45:01.623952 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-13 04:45:01.624114 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 a50fa9089065 25 hours ago 239MB
2026-02-13 04:45:01.624133 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 weeks ago 41.4MB
2026-02-13 04:45:01.624145 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB
2026-02-13 04:45:01.624156 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB
2026-02-13 04:45:01.624168 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-13 04:45:01.624179 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-13 04:45:01.624190 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-13 04:45:01.624203 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB
2026-02-13 04:45:01.624214 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-13 04:45:01.624298 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB
2026-02-13 04:45:01.624312 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB
2026-02-13 04:45:01.624323 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-13 04:45:01.624334 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB
2026-02-13 04:45:01.624345 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB
2026-02-13 04:45:01.624356 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB
2026-02-13 04:45:01.624367 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB
2026-02-13 04:45:01.624378 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB
2026-02-13 04:45:01.624389 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB
2026-02-13 04:45:01.624401 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 3 months ago 334MB
2026-02-13 04:45:01.624412 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB
2026-02-13 04:45:01.624423 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB
2026-02-13 04:45:01.624434 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB
2026-02-13 04:45:01.624444 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB
2026-02-13 04:45:01.624455 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB
2026-02-13 04:45:01.624467 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB
2026-02-13 04:45:01.926469 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-13 04:45:01.927083 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-13 04:45:01.980159 | orchestrator |
2026-02-13 04:45:01.980289 | orchestrator | ## Containers @ testbed-node-0
2026-02-13 04:45:01.980305 | orchestrator |
2026-02-13 04:45:01.980315 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-13 04:45:01.980324 | orchestrator | + echo
2026-02-13 04:45:01.980334 | orchestrator | + echo '## Containers @ testbed-node-0'
2026-02-13 04:45:01.980344 | orchestrator | + echo
2026-02-13 04:45:01.980353 | orchestrator | + osism container testbed-node-0 ps
2026-02-13 04:45:04.463123 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-13 04:45:04.463341 | orchestrator | d5e2a2dbfb30 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-02-13 04:45:04.463383 | orchestrator | e6748c967bb9 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api
2026-02-13 04:45:04.463395 | orchestrator | 2993897a1fb7 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana
2026-02-13 04:45:04.463405 | orchestrator | c4a7e30d33cf registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-02-13 04:45:04.463462 | orchestrator | bd05803df38b registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor
2026-02-13 04:45:04.463474 | orchestrator | cd95ee12cfe9 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_memcached_exporter
2026-02-13 04:45:04.463490 | orchestrator | 65f833eae6d3 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-02-13 04:45:04.463500 | orchestrator | 78eb7361cffb registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-13 04:45:04.463510 | orchestrator | 5125433c7251 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-02-13 04:45:04.463521 | orchestrator | 47d5404b2bb5 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler
2026-02-13 04:45:04.463530 | orchestrator | dda477b7e8ba registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data
2026-02-13 04:45:04.463540 | orchestrator | fe98dc02bece registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api
2026-02-13 04:45:04.463550 | orchestrator | faf632c1d449 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier
2026-02-13 04:45:04.463559 | orchestrator | 0aa645ab2666 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener
2026-02-13 04:45:04.463569 | orchestrator | 03d1cb4e6133 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_evaluator
2026-02-13 04:45:04.463578 | orchestrator | 1d245d0bf78b registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-13 04:45:04.463588 | orchestrator | 989b82b4497d registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central
2026-02-13 04:45:04.463597 | orchestrator | 51ca1c2c4154 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification
2026-02-13 04:45:04.463607 | orchestrator | b3c79df51241 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-02-13 04:45:04.463672 | orchestrator | c0cb1825f684 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping
2026-02-13 04:45:04.463694 | orchestrator | 537d2c25b3c4 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager
2026-02-13 04:45:04.463711 | orchestrator | 8ced3ad8276f registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent
2026-02-13 04:45:04.463755 | orchestrator | 5818b3da03e2 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api
2026-02-13 04:45:04.463766 | orchestrator | e33175e528cf registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-13 04:45:04.463775 | orchestrator | d75c2aaaec1a registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns
2026-02-13 04:45:04.463822 | orchestrator | c34018d6e733 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer
2026-02-13 04:45:04.463843 | orchestrator | 9292bb28f007 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central
2026-02-13 04:45:04.463853 | orchestrator | 80fd9a858572 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api
2026-02-13 04:45:04.463863 | orchestrator | 4445a6b71d93 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9
2026-02-13 04:45:04.463873 | orchestrator | a20818344569 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker
2026-02-13 04:45:04.463883 | orchestrator | c8db44842f6e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener
2026-02-13 04:45:04.463893 | orchestrator | 6752e0988b1f registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_api
2026-02-13 04:45:04.463903 | orchestrator | df656138849b registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup
2026-02-13 04:45:04.463913 | orchestrator | 5b7bff802888 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume
2026-02-13 04:45:04.463923 | orchestrator | f8eb6acaeed2 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler
2026-02-13 04:45:04.463932 | orchestrator | 1ce2c458f37b registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api
2026-02-13 04:45:04.463942 | orchestrator | 2ec8fac80b58 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api
2026-02-13 04:45:04.463952 | orchestrator | e615263f3847 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console
2026-02-13 04:45:04.463962 | orchestrator | c332e3667351 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver
2026-02-13 04:45:04.463984 | orchestrator | d07ef0847d0b registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon
2026-02-13 04:45:04.464001 | orchestrator | 57a5434d58b7 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy
2026-02-13 04:45:04.464012 | orchestrator | d898fdc5673f registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor
2026-02-13 04:45:04.464027 | orchestrator | b3c881085c63 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api
2026-02-13 04:45:04.464037 | orchestrator | f0063169832d registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler
2026-02-13 04:45:04.464046 | orchestrator | aff2472474bd registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server
2026-02-13 04:45:04.464056 | orchestrator | e3e693b7a886 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api
2026-02-13 04:45:04.464065 | orchestrator | 439c467976ee registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 53 minutes (healthy) keystone
2026-02-13 04:45:04.464075 | orchestrator | 5135b463dc80 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet
2026-02-13 04:45:04.464085 | orchestrator | 4602c4c1bbaf registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh
2026-02-13 04:45:04.464095 | orchestrator | 5cf6a4c0add9 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-0
2026-02-13 04:45:04.464104 | orchestrator | 8636d4add711 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0
2026-02-13 04:45:04.464114 | orchestrator | 9a39aafafb69 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0
2026-02-13 04:45:04.464123 | orchestrator | f12d99139820 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-13 04:45:04.464133 | orchestrator | f30a44b7856b registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-13 04:45:04.464142 | orchestrator | 986126b484a1 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-13 04:45:04.464152 | orchestrator | f76b30c1d694 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-13 04:45:04.464178 | orchestrator | 47e87182a8ca registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-13 04:45:04.464188 | orchestrator | 7aacb54e5d59 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-13 04:45:04.464204 | orchestrator | bb58e0a1060d registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-13 04:45:04.464272 | orchestrator | 53c314601298 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-13 04:45:04.464286 | orchestrator | 5963b522d76a registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-13 04:45:04.464296 | orchestrator | c66f7631d529 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-13 04:45:04.464306 | orchestrator | 85e48cd88465 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-13 04:45:04.464316 | orchestrator | 49e79fb1bb9c registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-13 04:45:04.464325 | orchestrator | 8da2de055c89 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch
2026-02-13 04:45:04.464335 | orchestrator | 0575d2134989 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived
2026-02-13 04:45:04.464345 | orchestrator | b67444ec4380 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql
2026-02-13 04:45:04.464354 | orchestrator | f842e71ebde7 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy
2026-02-13 04:45:04.464364 | orchestrator | a7d86aef1851 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-13 04:45:04.464374 | orchestrator | 7f98b00c5a02 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130
"dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-13 04:45:04.464383 | orchestrator | f8f8ea1dd4e5 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-13 04:45:04.777539 | orchestrator | 2026-02-13 04:45:04.777647 | orchestrator | ## Images @ testbed-node-0 2026-02-13 04:45:04.777665 | orchestrator | 2026-02-13 04:45:04.777677 | orchestrator | + echo 2026-02-13 04:45:04.777702 | orchestrator | + echo '## Images @ testbed-node-0' 2026-02-13 04:45:04.777715 | orchestrator | + echo 2026-02-13 04:45:04.777727 | orchestrator | + osism container testbed-node-0 images 2026-02-13 04:45:07.243081 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-13 04:45:07.243340 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-13 04:45:07.243374 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-13 04:45:07.243389 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-13 04:45:07.243402 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-13 04:45:07.243459 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-13 04:45:07.243475 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-13 04:45:07.243489 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-13 04:45:07.243504 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-13 04:45:07.243519 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-13 04:45:07.243534 | 
orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-13 04:45:07.243548 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-13 04:45:07.243562 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-13 04:45:07.243572 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-13 04:45:07.243581 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-13 04:45:07.243590 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-13 04:45:07.243600 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-13 04:45:07.243609 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-13 04:45:07.243618 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-13 04:45:07.243627 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-13 04:45:07.243636 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-13 04:45:07.243645 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-13 04:45:07.243654 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-13 04:45:07.243663 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-13 
04:45:07.243673 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-13 04:45:07.243681 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-13 04:45:07.243689 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-13 04:45:07.243697 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-13 04:45:07.243712 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-13 04:45:07.243720 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-13 04:45:07.243728 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-13 04:45:07.243744 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-13 04:45:07.243770 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-13 04:45:07.243779 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-13 04:45:07.243786 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-13 04:45:07.243794 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-13 04:45:07.243802 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-13 04:45:07.243809 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-13 04:45:07.243817 | 
orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-13 04:45:07.243825 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-13 04:45:07.243833 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-13 04:45:07.243840 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-13 04:45:07.243848 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-13 04:45:07.243856 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-13 04:45:07.243863 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-13 04:45:07.243871 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-13 04:45:07.243879 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-13 04:45:07.243887 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-13 04:45:07.243895 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-13 04:45:07.243903 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-13 04:45:07.243911 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-13 04:45:07.243918 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-13 04:45:07.243926 | 
orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-13 04:45:07.243934 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-13 04:45:07.243941 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-13 04:45:07.243949 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-13 04:45:07.243957 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-13 04:45:07.243970 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-13 04:45:07.243978 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-13 04:45:07.243990 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-13 04:45:07.243998 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-13 04:45:07.244006 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-13 04:45:07.244014 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-13 04:45:07.244021 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-13 04:45:07.244036 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-13 04:45:07.244044 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-13 04:45:07.244051 | 
orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-13 04:45:07.244059 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-13 04:45:07.244067 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-13 04:45:07.244075 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-13 04:45:07.567137 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-13 04:45:07.567468 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-13 04:45:07.617166 | orchestrator | 2026-02-13 04:45:07.617323 | orchestrator | ## Containers @ testbed-node-1 2026-02-13 04:45:07.617345 | orchestrator | 2026-02-13 04:45:07.617358 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-13 04:45:07.617370 | orchestrator | + echo 2026-02-13 04:45:07.617381 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-13 04:45:07.617394 | orchestrator | + echo 2026-02-13 04:45:07.617406 | orchestrator | + osism container testbed-node-1 ps 2026-02-13 04:45:10.089599 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-13 04:45:10.089700 | orchestrator | 1ad9e9ceceba registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-13 04:45:10.089717 | orchestrator | 50c57c7f037a registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-13 04:45:10.089729 | orchestrator | f717c7a283a5 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-13 04:45:10.089741 | orchestrator | 0a1fe96b50a7 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 
"dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-13 04:45:10.089754 | orchestrator | 60fc061dd38a registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-13 04:45:10.089765 | orchestrator | 34aea3678b49 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_memcached_exporter 2026-02-13 04:45:10.089796 | orchestrator | 1a76bcfa1b69 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-13 04:45:10.089808 | orchestrator | 4108b15e7d94 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-13 04:45:10.089819 | orchestrator | 05a3f7503059 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-13 04:45:10.089830 | orchestrator | 6754547f2751 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-13 04:45:10.089841 | orchestrator | 6cc9a5095306 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-13 04:45:10.089852 | orchestrator | 172d578cfa41 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_api 2026-02-13 04:45:10.089873 | orchestrator | f0171fc2447e registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-13 04:45:10.089884 | orchestrator | 8f5bd137ed12 
registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_listener 2026-02-13 04:45:10.089895 | orchestrator | 6ddf825069c9 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-13 04:45:10.089906 | orchestrator | f2c3b795139b registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-13 04:45:10.089917 | orchestrator | 9b1bc3bb4b25 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-13 04:45:10.089928 | orchestrator | 8562cc812b19 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-02-13 04:45:10.089939 | orchestrator | a0a374a20524 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-13 04:45:10.089967 | orchestrator | e3275037635d registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-13 04:45:10.089978 | orchestrator | 931ec394f3d6 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-13 04:45:10.089989 | orchestrator | a42464f99ff5 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-13 04:45:10.090000 | orchestrator | d85e1ab3974d registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-02-13 04:45:10.090010 | 
orchestrator | dc479fcc6cab registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-13 04:45:10.090104 | orchestrator | d9630638d44b registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-13 04:45:10.090118 | orchestrator | de4835ba3f77 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-13 04:45:10.090131 | orchestrator | addb3424ec57 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_central 2026-02-13 04:45:10.090144 | orchestrator | 74d6768ea31a registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-13 04:45:10.090157 | orchestrator | 0cc21cff28c0 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-02-13 04:45:10.090170 | orchestrator | 7479cb2eea17 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-02-13 04:45:10.090183 | orchestrator | dc5ab6d38e01 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-02-13 04:45:10.090196 | orchestrator | decb9daba3e2 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) barbican_api 2026-02-13 04:45:10.090210 | orchestrator | af6bb1476394 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes 
(healthy) cinder_backup 2026-02-13 04:45:10.090261 | orchestrator | 726aee16182b registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-02-13 04:45:10.090281 | orchestrator | 4e6d85875b1a registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-13 04:45:10.090300 | orchestrator | 58c7f5ae2bea registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-13 04:45:10.090333 | orchestrator | 3be80e7936c9 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 33 minutes (healthy) glance_api 2026-02-13 04:45:10.090360 | orchestrator | 9acc5b89ea63 registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-13 04:45:10.090378 | orchestrator | 97ec1f1414b2 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-13 04:45:10.090409 | orchestrator | 035a5abc8447 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-13 04:45:10.090429 | orchestrator | 0102e7117023 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-02-13 04:45:10.090461 | orchestrator | d86ccc19e6e4 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-02-13 04:45:10.090480 | orchestrator | e8e62b84ac96 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-13 
04:45:10.090498 | orchestrator | 4208f668defb registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-02-13 04:45:10.090516 | orchestrator | bec9af9a34c1 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-02-13 04:45:10.090527 | orchestrator | ddbf777fb7dd registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 51 minutes ago Up 51 minutes (healthy) placement_api 2026-02-13 04:45:10.090537 | orchestrator | d0769b1baf45 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 2026-02-13 04:45:10.090548 | orchestrator | f4ed5823ef1f registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-02-13 04:45:10.090559 | orchestrator | bb24f5114c13 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-13 04:45:10.090569 | orchestrator | 1ac9d2e4bdcb registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-1 2026-02-13 04:45:10.090581 | orchestrator | b4605bf5f77b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-13 04:45:10.090591 | orchestrator | b8f8955ec790 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-13 04:45:10.090602 | orchestrator | 7c1646950228 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-13 04:45:10.090612 | orchestrator | f262d4452bf3 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-13 04:45:10.090623 | orchestrator | b70c407c0622 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-13 04:45:10.090634 | orchestrator | 61a00486ca75 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-13 04:45:10.090644 | orchestrator | a46065b2c518 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-13 04:45:10.090655 | orchestrator | 6baa981ddae9 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-13 04:45:10.090665 | orchestrator | 443e153078ab registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-13 04:45:10.090690 | orchestrator | d9a9e4ba96a7 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-13 04:45:10.090701 | orchestrator | 36308da67158 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-13 04:45:10.090712 | orchestrator | ebcbd38b60cd registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-13 04:45:10.090722 | orchestrator | 4bf44bf4620f registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-13 04:45:10.090733 | orchestrator | 49c3440ec4d1 
registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-13 04:45:10.090752 | orchestrator | bf3b5adfd2cb registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-13 04:45:10.090771 | orchestrator | 7581fa541208 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-13 04:45:10.090788 | orchestrator | f3cc664dbc16 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-13 04:45:10.090805 | orchestrator | 1479cb7e17ed registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-13 04:45:10.090823 | orchestrator | d0010dcbd2c4 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-13 04:45:10.090849 | orchestrator | 6e47b7dc9953 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-13 04:45:10.090869 | orchestrator | a1daa8bafa00 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-13 04:45:10.399384 | orchestrator | 2026-02-13 04:45:10.399488 | orchestrator | ## Images @ testbed-node-1 2026-02-13 04:45:10.399505 | orchestrator | 2026-02-13 04:45:10.399517 | orchestrator | + echo 2026-02-13 04:45:10.399529 | orchestrator | + echo '## Images @ testbed-node-1' 2026-02-13 04:45:10.399541 | orchestrator | + echo 2026-02-13 04:45:10.399553 | orchestrator | + osism container testbed-node-1 images 2026-02-13 04:45:12.808724 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-13 04:45:12.809148 | orchestrator | 
registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-13 04:45:12.809176 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-13 04:45:12.809187 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-13 04:45:12.809197 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-13 04:45:12.809206 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-13 04:45:12.809215 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-13 04:45:12.809280 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-13 04:45:12.809291 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-13 04:45:12.809300 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-13 04:45:12.809802 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-13 04:45:12.809821 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-13 04:45:12.809829 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-13 04:45:12.809838 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-13 04:45:12.809847 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-13 04:45:12.809855 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 
1.15GB 2026-02-13 04:45:12.809864 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-13 04:45:12.809872 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-13 04:45:12.809881 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-13 04:45:12.809889 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-13 04:45:12.810281 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-13 04:45:12.810306 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-13 04:45:12.810322 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-13 04:45:12.810336 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-13 04:45:12.810351 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-13 04:45:12.810365 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-13 04:45:12.810374 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-13 04:45:12.810383 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-13 04:45:12.810882 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-13 04:45:12.810901 | orchestrator | 
registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-13 04:45:12.810910 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-13 04:45:12.810919 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-13 04:45:12.810928 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-13 04:45:12.810950 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-13 04:45:12.810958 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-13 04:45:12.810967 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-13 04:45:12.810975 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-13 04:45:12.810984 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-13 04:45:12.811004 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-13 04:45:12.811013 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-13 04:45:12.811022 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-13 04:45:12.811031 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-13 04:45:12.811039 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-13 04:45:12.811048 | orchestrator | 
registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-13 04:45:12.811056 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-13 04:45:12.811065 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-13 04:45:12.811073 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-13 04:45:12.811082 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-13 04:45:12.811769 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-13 04:45:12.811875 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-13 04:45:12.811898 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-13 04:45:12.811916 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-13 04:45:12.811935 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-13 04:45:12.811954 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-13 04:45:12.811972 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-13 04:45:12.811990 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-13 04:45:12.812009 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-13 04:45:12.812028 | orchestrator | 
registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-13 04:45:12.812046 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-13 04:45:12.812064 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-13 04:45:12.812110 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-13 04:45:12.812129 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-13 04:45:12.812147 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-13 04:45:12.812165 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-13 04:45:12.812183 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-13 04:45:12.812201 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-13 04:45:12.812220 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-13 04:45:12.812288 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-13 04:45:12.812308 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-13 04:45:12.812327 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-13 04:45:13.125859 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-13 04:45:13.125942 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-13 04:45:13.168615 |
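The trace compares the installed manager version against a minimum via a `semver` helper whose output is then tested with `[[ 1 -eq -1 ]]`, i.e. the helper apparently prints -1, 0, or 1 depending on how the first version compares to the second. The helper itself is not shown in this log; a minimal stand-in sketch, assuming GNU `sort -V` is available and using the hypothetical name `semver_cmp`:

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the job's `semver` helper (not taken from the
# log): prints -1, 0, or 1 depending on how version $1 compares to $2.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
    elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
        echo -1   # $1 sorts first under version sort, so it is older
    else
        echo 1    # $1 is newer
    fi
}

# Mirrors the trace above: 9.5.0 vs. 5.0.0 prints 1,
# so the subsequent "[[ 1 -eq -1 ]]" check is false.
semver_cmp 9.5.0 5.0.0
```

`sort -V` handles multi-digit components correctly (e.g. 9.10.0 sorts after 9.5.0), which a plain lexicographic comparison would get wrong.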
orchestrator | 2026-02-13 04:45:13.168697 | orchestrator | ## Containers @ testbed-node-2 2026-02-13 04:45:13.168706 | orchestrator | 2026-02-13 04:45:13.168712 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-13 04:45:13.168719 | orchestrator | + echo 2026-02-13 04:45:13.168726 | orchestrator | + echo '## Containers @ testbed-node-2' 2026-02-13 04:45:13.168733 | orchestrator | + echo 2026-02-13 04:45:13.168739 | orchestrator | + osism container testbed-node-2 ps 2026-02-13 04:45:15.606701 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-13 04:45:15.606772 | orchestrator | ef240c2455ef registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-13 04:45:15.606780 | orchestrator | d1a8632f224f registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_api 2026-02-13 04:45:15.606784 | orchestrator | 7725e644d576 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-13 04:45:15.606788 | orchestrator | 2031bde090ad registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-13 04:45:15.606794 | orchestrator | eb30f3e65697 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-13 04:45:15.606797 | orchestrator | 7aa173bd37a6 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-13 04:45:15.606803 | orchestrator | 3d1bf0e575f3 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-13 
04:45:15.606807 | orchestrator | 1a7640ef0374 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-13 04:45:15.606825 | orchestrator | 1a33796556b1 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-13 04:45:15.606829 | orchestrator | 61a08bdaf23c registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-13 04:45:15.606833 | orchestrator | 18b09482fd61 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_data 2026-02-13 04:45:15.606837 | orchestrator | 05c01af67dac registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-13 04:45:15.606855 | orchestrator | 3ca48b296648 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) aodh_notifier 2026-02-13 04:45:15.606859 | orchestrator | 7ceef430a38f registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-13 04:45:15.606863 | orchestrator | c8ad103f75b8 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-13 04:45:15.606867 | orchestrator | bc0514994a26 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-13 04:45:15.606870 | orchestrator | d75faae8da50 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes ceilometer_central 2026-02-13 04:45:15.606874 | orchestrator | 
b0ed38fb8eec registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) ceilometer_notification 2026-02-13 04:45:15.606878 | orchestrator | 174c015d36ca registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-13 04:45:15.606891 | orchestrator | a7f92fb2c122 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-13 04:45:15.606895 | orchestrator | 38f841862bca registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_health_manager 2026-02-13 04:45:15.606899 | orchestrator | 948fe262c2d7 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes octavia_driver_agent 2026-02-13 04:45:15.606903 | orchestrator | 0de5732de142 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_api 2026-02-13 04:45:15.606906 | orchestrator | b1ec13b1e594 registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-13 04:45:15.606910 | orchestrator | 61e746ccd6ca registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_mdns 2026-02-13 04:45:15.606920 | orchestrator | 81d0c736197c registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_producer 2026-02-13 04:45:15.606924 | orchestrator | bf8534f24d41 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) 
designate_central 2026-02-13 04:45:15.606928 | orchestrator | 9581c28497e2 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_api 2026-02-13 04:45:15.606932 | orchestrator | 4bfff1c8aca7 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_backend_bind9 2026-02-13 04:45:15.606935 | orchestrator | fc844fc4e651 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_worker 2026-02-13 04:45:15.606939 | orchestrator | 8b07e088d589 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) barbican_keystone_listener 2026-02-13 04:45:15.606943 | orchestrator | 6081d7e3f7f0 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-13 04:45:15.606947 | orchestrator | 21aa22421ffa registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_backup 2026-02-13 04:45:15.606950 | orchestrator | 0e25131aa5fb registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) cinder_volume 2026-02-13 04:45:15.606954 | orchestrator | 8a179a87bfd7 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-13 04:45:15.606958 | orchestrator | 411478bbd022 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_api 2026-02-13 04:45:15.606961 | orchestrator | 1a1f9d139b63 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 33 minutes ago Up 
33 minutes (healthy) glance_api 2026-02-13 04:45:15.606965 | orchestrator | fe3e557d222a registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_console 2026-02-13 04:45:15.606969 | orchestrator | f97617b37972 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 36 minutes ago Up 36 minutes (healthy) skyline_apiserver 2026-02-13 04:45:15.606976 | orchestrator | 3518f7b10caa registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) horizon 2026-02-13 04:45:15.606980 | orchestrator | ee2d78ce0c24 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_novncproxy 2026-02-13 04:45:15.606983 | orchestrator | 09ea666e1daf registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 41 minutes ago Up 41 minutes (healthy) nova_conductor 2026-02-13 04:45:15.606987 | orchestrator | d7611562171c registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_api 2026-02-13 04:45:15.606994 | orchestrator | f5eb1248b03f registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_scheduler 2026-02-13 04:45:15.606998 | orchestrator | 0b039319a4ec registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 48 minutes ago Up 48 minutes (healthy) neutron_server 2026-02-13 04:45:15.607002 | orchestrator | dfb9902d067e registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 52 minutes ago Up 52 minutes (healthy) placement_api 2026-02-13 04:45:15.607005 | orchestrator | ce8d76b796ac registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone 
2026-02-13 04:45:15.607009 | orchestrator | 2bddb1f60f4a registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_fernet 2026-02-13 04:45:15.607013 | orchestrator | 6e9d2c3daad4 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 54 minutes ago Up 54 minutes (healthy) keystone_ssh 2026-02-13 04:45:15.607016 | orchestrator | 99e869df98b3 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 56 minutes ago Up 56 minutes ceph-mgr-testbed-node-2 2026-02-13 04:45:15.607020 | orchestrator | 01e49f2d51d2 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2 2026-02-13 04:45:15.607026 | orchestrator | 30f78d02966b registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2 2026-02-13 04:45:15.607030 | orchestrator | 65b4f75fe0e8 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-13 04:45:15.607037 | orchestrator | 9bc722a154bc registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-13 04:45:15.607040 | orchestrator | 78b87a0878da registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-13 04:45:15.607044 | orchestrator | b25116d2bb9f registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-13 04:45:15.607048 | orchestrator | 45ce24cddff7 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-13 04:45:15.607052 | orchestrator | a417bf70263b 
registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-13 04:45:15.607191 | orchestrator | de81212fa2af registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-13 04:45:15.607200 | orchestrator | 297de3d377b4 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-13 04:45:15.607204 | orchestrator | 046efaec08c8 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-13 04:45:15.607212 | orchestrator | 249204fc7a3c registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-13 04:45:15.607216 | orchestrator | d21e502fcef0 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-13 04:45:15.607219 | orchestrator | 09a233855dc9 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards 2026-02-13 04:45:15.607223 | orchestrator | f6e6f3e81e79 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch 2026-02-13 04:45:15.607249 | orchestrator | b45f7b1eedd0 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" About an hour ago Up About an hour keepalived 2026-02-13 04:45:15.607262 | orchestrator | 58b3099e1de0 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) proxysql 2026-02-13 04:45:15.607269 | orchestrator | 1ca48266176e 
registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) haproxy 2026-02-13 04:45:15.607281 | orchestrator | 9e32a932ce14 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-13 04:45:15.607286 | orchestrator | 6e8a57a8635d registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-13 04:45:15.607292 | orchestrator | 15b70dde3094 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-13 04:45:15.924136 | orchestrator | 2026-02-13 04:45:15.924316 | orchestrator | ## Images @ testbed-node-2 2026-02-13 04:45:15.924336 | orchestrator | 2026-02-13 04:45:15.924347 | orchestrator | + echo 2026-02-13 04:45:15.924357 | orchestrator | + echo '## Images @ testbed-node-2' 2026-02-13 04:45:15.924368 | orchestrator | + echo 2026-02-13 04:45:15.924378 | orchestrator | + osism container testbed-node-2 images 2026-02-13 04:45:18.404921 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-13 04:45:18.405076 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-13 04:45:18.405105 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-13 04:45:18.405190 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-13 04:45:18.405294 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-13 04:45:18.405320 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-13 04:45:18.405339 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-13 
04:45:18.405351 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-13 04:45:18.405363 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-13 04:45:18.405395 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-13 04:45:18.405407 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-13 04:45:18.405424 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-13 04:45:18.405437 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-13 04:45:18.405450 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-13 04:45:18.405463 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-13 04:45:18.405474 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-13 04:45:18.405485 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-13 04:45:18.405496 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-13 04:45:18.405506 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-13 04:45:18.405517 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-13 04:45:18.405528 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-13 04:45:18.405538 | 
orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-13 04:45:18.405549 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-13 04:45:18.405560 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-13 04:45:18.405571 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-13 04:45:18.405581 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-13 04:45:18.405592 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-13 04:45:18.405603 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-13 04:45:18.405614 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-13 04:45:18.405625 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-13 04:45:18.405635 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-13 04:45:18.405645 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-13 04:45:18.405676 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-13 04:45:18.405687 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-13 04:45:18.405698 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-13 04:45:18.405708 | 
orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-13 04:45:18.405727 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-13 04:45:18.405738 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-13 04:45:18.405749 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-13 04:45:18.405768 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-13 04:45:18.405779 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-13 04:45:18.405790 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-13 04:45:18.405799 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-13 04:45:18.405809 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-13 04:45:18.405820 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-13 04:45:18.405836 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-13 04:45:18.405851 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-13 04:45:18.405866 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-13 04:45:18.405882 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-13 04:45:18.405898 | 
orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-13 04:45:18.405913 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-13 04:45:18.405929 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-13 04:45:18.405943 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-13 04:45:18.405958 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-13 04:45:18.405973 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-13 04:45:18.405988 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-13 04:45:18.406003 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-13 04:45:18.406073 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-13 04:45:18.406092 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-13 04:45:18.406109 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-13 04:45:18.406125 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-13 04:45:18.406141 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-13 04:45:18.406169 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-13 04:45:18.406185 
| orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-13 04:45:18.406214 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-13 04:45:18.406252 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-13 04:45:18.406269 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-13 04:45:18.406285 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-13 04:45:18.406309 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-13 04:45:18.406325 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-13 04:45:18.730445 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-02-13 04:45:18.735971 | orchestrator | + set -e
2026-02-13 04:45:18.736025 | orchestrator | + source /opt/manager-vars.sh
2026-02-13 04:45:18.736033 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-13 04:45:18.736038 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-13 04:45:18.736043 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-13 04:45:18.736577 | orchestrator | ++ CEPH_VERSION=reef
2026-02-13 04:45:18.736604 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-13 04:45:18.736617 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-13 04:45:18.736628 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-13 04:45:18.736639 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-13 04:45:18.736649 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-13 04:45:18.736660 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-13 04:45:18.736667 | orchestrator | ++ export ARA=false
2026-02-13 04:45:18.736676 | orchestrator | ++ ARA=false
2026-02-13 04:45:18.736687 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-13 04:45:18.736697 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-13 04:45:18.736707 | orchestrator | ++ export TEMPEST=false
2026-02-13 04:45:18.736717 | orchestrator | ++ TEMPEST=false
2026-02-13 04:45:18.736726 | orchestrator | ++ export IS_ZUUL=true
2026-02-13 04:45:18.736736 | orchestrator | ++ IS_ZUUL=true
2026-02-13 04:45:18.736747 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228
2026-02-13 04:45:18.736758 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228
2026-02-13 04:45:18.736769 | orchestrator | ++ export EXTERNAL_API=false
2026-02-13 04:45:18.736780 | orchestrator | ++ EXTERNAL_API=false
2026-02-13 04:45:18.736790 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-13 04:45:18.736801 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-13 04:45:18.736812 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-13 04:45:18.736823 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-13 04:45:18.736830 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-13 04:45:18.736836 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-13 04:45:18.736843 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-13 04:45:18.736849 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-02-13 04:45:18.747192 | orchestrator | + set -e
2026-02-13 04:45:18.747333 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-13 04:45:18.747352 | orchestrator | ++ export INTERACTIVE=false
2026-02-13 04:45:18.747366 | orchestrator | ++ INTERACTIVE=false
2026-02-13 04:45:18.748026 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-13 04:45:18.748058 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-13 04:45:18.749351 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-13 04:45:18.750223 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-13 04:45:18.757209 | orchestrator |
2026-02-13 04:45:18.757298 | orchestrator | # Ceph status
2026-02-13 04:45:18.757311 | orchestrator |
2026-02-13 04:45:18.757323 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-13 04:45:18.757335 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-13 04:45:18.757347 | orchestrator | + echo
2026-02-13 04:45:18.757358 | orchestrator | + echo '# Ceph status'
2026-02-13 04:45:18.757402 | orchestrator | + echo
2026-02-13 04:45:18.757414 | orchestrator | + ceph -s
2026-02-13 04:45:19.360101 | orchestrator | cluster:
2026-02-13 04:45:19.360206 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2026-02-13 04:45:19.360222 | orchestrator | health: HEALTH_OK
2026-02-13 04:45:19.360304 | orchestrator |
2026-02-13 04:45:19.360317 | orchestrator | services:
2026-02-13 04:45:19.360328 | orchestrator | mon: 3 daemons, quorum testbed-node-1,testbed-node-0,testbed-node-2 (age 68m)
2026-02-13 04:45:19.360341 | orchestrator | mgr: testbed-node-1(active, since 56m), standbys: testbed-node-2, testbed-node-0
2026-02-13 04:45:19.360353 | orchestrator | mds: 1/1 daemons up, 2 standby
2026-02-13 04:45:19.360364 | orchestrator | osd: 6 osds: 6 up (since 64m), 6 in (since 65m)
2026-02-13 04:45:19.360376 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2026-02-13 04:45:19.360388 | orchestrator |
2026-02-13 04:45:19.360399 | orchestrator | data:
2026-02-13 04:45:19.360410 | orchestrator | volumes: 1/1 healthy
2026-02-13 04:45:19.360421 | orchestrator | pools: 14 pools, 401 pgs
2026-02-13 04:45:19.360432 | orchestrator | objects: 556 objects, 2.2 GiB
2026-02-13 04:45:19.360443 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2026-02-13 04:45:19.360455 | orchestrator | pgs: 401 active+clean
2026-02-13 04:45:19.360466 | orchestrator |
2026-02-13 04:45:19.411606 | orchestrator |
2026-02-13 04:45:19.411672 | orchestrator | # Ceph versions
2026-02-13 04:45:19.411677 | orchestrator |
2026-02-13 04:45:19.411682 | orchestrator | + echo
2026-02-13 04:45:19.411686 | orchestrator | + echo '# Ceph versions'
2026-02-13 04:45:19.411691 | orchestrator | + echo
2026-02-13 04:45:19.411695 | orchestrator | + ceph versions
2026-02-13 04:45:20.007809 | orchestrator | {
2026-02-13 04:45:20.007904 | orchestrator | "mon": {
2026-02-13 04:45:20.007918 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-13 04:45:20.007930 | orchestrator | },
2026-02-13 04:45:20.007939 | orchestrator | "mgr": {
2026-02-13 04:45:20.007948 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-13 04:45:20.007957 | orchestrator | },
2026-02-13 04:45:20.007967 | orchestrator | "osd": {
2026-02-13 04:45:20.007976 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2026-02-13 04:45:20.007985 | orchestrator | },
2026-02-13 04:45:20.007993 | orchestrator | "mds": {
2026-02-13 04:45:20.007999 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-13 04:45:20.008004 | orchestrator | },
2026-02-13 04:45:20.008009 | orchestrator | "rgw": {
2026-02-13 04:45:20.008015 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2026-02-13 04:45:20.008020 | orchestrator | },
2026-02-13 04:45:20.008026 | orchestrator | "overall": {
2026-02-13 04:45:20.008032 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2026-02-13 04:45:20.008038 | orchestrator | }
2026-02-13 04:45:20.008043 | orchestrator | }
2026-02-13 04:45:20.050386 | orchestrator |
2026-02-13 04:45:20.050453 | orchestrator | # Ceph OSD tree
2026-02-13 04:45:20.050459 | orchestrator |
2026-02-13 04:45:20.050464 | orchestrator | + echo
2026-02-13 04:45:20.050470 | orchestrator | + echo '# Ceph OSD tree'
2026-02-13
04:45:20.050475 | orchestrator | + echo 2026-02-13 04:45:20.050480 | orchestrator | + ceph osd df tree 2026-02-13 04:45:20.523671 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-02-13 04:45:20.523765 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 390 MiB 113 GiB 5.88 1.00 - root default 2026-02-13 04:45:20.523778 | orchestrator | -3 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 123 MiB 38 GiB 5.87 1.00 - host testbed-node-3 2026-02-13 04:45:20.523783 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 62 MiB 19 GiB 7.16 1.22 201 up osd.0 2026-02-13 04:45:20.523788 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 936 MiB 875 MiB 1 KiB 62 MiB 19 GiB 4.58 0.78 189 up osd.5 2026-02-13 04:45:20.523793 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-4 2026-02-13 04:45:20.523797 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 62 MiB 19 GiB 6.49 1.10 184 up osd.1 2026-02-13 04:45:20.523819 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1011 MiB 1 KiB 66 MiB 19 GiB 5.26 0.89 204 up osd.3 2026-02-13 04:45:20.523824 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-02-13 04:45:20.523829 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 66 MiB 18 GiB 7.77 1.32 203 up osd.2 2026-02-13 04:45:20.523834 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 828 MiB 755 MiB 1 KiB 74 MiB 19 GiB 4.05 0.69 189 up osd.4 2026-02-13 04:45:20.523838 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 390 MiB 113 GiB 5.88 2026-02-13 04:45:20.523843 | orchestrator | MIN/MAX VAR: 0.69/1.32 STDDEV: 1.35 2026-02-13 04:45:20.565019 | orchestrator | 2026-02-13 04:45:20.565105 | orchestrator | # Ceph monitor status 2026-02-13 04:45:20.565118 | orchestrator | 2026-02-13 04:45:20.565128 | orchestrator | + echo 2026-02-13 04:45:20.565138 | orchestrator | + echo '# 
Ceph monitor status' 2026-02-13 04:45:20.565147 | orchestrator | + echo 2026-02-13 04:45:20.565156 | orchestrator | + ceph mon stat 2026-02-13 04:45:21.186609 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.8:3300/0,v1:192.168.16.8:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-1, quorum 0,1,2 testbed-node-1,testbed-node-0,testbed-node-2 2026-02-13 04:45:21.240324 | orchestrator | 2026-02-13 04:45:21.240452 | orchestrator | # Ceph quorum status 2026-02-13 04:45:21.240469 | orchestrator | 2026-02-13 04:45:21.240479 | orchestrator | + echo 2026-02-13 04:45:21.240490 | orchestrator | + echo '# Ceph quorum status' 2026-02-13 04:45:21.240500 | orchestrator | + echo 2026-02-13 04:45:21.240509 | orchestrator | + ceph quorum_status 2026-02-13 04:45:21.240584 | orchestrator | + jq 2026-02-13 04:45:21.890715 | orchestrator | { 2026-02-13 04:45:21.890814 | orchestrator | "election_epoch": 8, 2026-02-13 04:45:21.890830 | orchestrator | "quorum": [ 2026-02-13 04:45:21.890842 | orchestrator | 0, 2026-02-13 04:45:21.890853 | orchestrator | 1, 2026-02-13 04:45:21.890864 | orchestrator | 2 2026-02-13 04:45:21.890874 | orchestrator | ], 2026-02-13 04:45:21.890886 | orchestrator | "quorum_names": [ 2026-02-13 04:45:21.890897 | orchestrator | "testbed-node-1", 2026-02-13 04:45:21.890907 | orchestrator | "testbed-node-0", 2026-02-13 04:45:21.890918 | orchestrator | "testbed-node-2" 2026-02-13 04:45:21.890929 | orchestrator | ], 2026-02-13 04:45:21.890940 | orchestrator | "quorum_leader_name": "testbed-node-1", 2026-02-13 04:45:21.890952 | orchestrator | "quorum_age": 4117, 2026-02-13 04:45:21.890963 | orchestrator | "features": { 2026-02-13 04:45:21.890974 | orchestrator | "quorum_con": "4540138322906710015", 2026-02-13 04:45:21.890985 | orchestrator | "quorum_mon": [ 2026-02-13 04:45:21.890995 | 
orchestrator | "kraken", 2026-02-13 04:45:21.891006 | orchestrator | "luminous", 2026-02-13 04:45:21.891017 | orchestrator | "mimic", 2026-02-13 04:45:21.891028 | orchestrator | "osdmap-prune", 2026-02-13 04:45:21.891038 | orchestrator | "nautilus", 2026-02-13 04:45:21.891049 | orchestrator | "octopus", 2026-02-13 04:45:21.891060 | orchestrator | "pacific", 2026-02-13 04:45:21.891070 | orchestrator | "elector-pinging", 2026-02-13 04:45:21.891081 | orchestrator | "quincy", 2026-02-13 04:45:21.891092 | orchestrator | "reef" 2026-02-13 04:45:21.891103 | orchestrator | ] 2026-02-13 04:45:21.891113 | orchestrator | }, 2026-02-13 04:45:21.891124 | orchestrator | "monmap": { 2026-02-13 04:45:21.891135 | orchestrator | "epoch": 1, 2026-02-13 04:45:21.891146 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-02-13 04:45:21.891158 | orchestrator | "modified": "2026-02-13T03:36:27.538999Z", 2026-02-13 04:45:21.891169 | orchestrator | "created": "2026-02-13T03:36:27.538999Z", 2026-02-13 04:45:21.891180 | orchestrator | "min_mon_release": 18, 2026-02-13 04:45:21.891191 | orchestrator | "min_mon_release_name": "reef", 2026-02-13 04:45:21.891202 | orchestrator | "election_strategy": 1, 2026-02-13 04:45:21.891213 | orchestrator | "disallowed_leaders: ": "", 2026-02-13 04:45:21.891223 | orchestrator | "stretch_mode": false, 2026-02-13 04:45:21.891261 | orchestrator | "tiebreaker_mon": "", 2026-02-13 04:45:21.891273 | orchestrator | "removed_ranks: ": "", 2026-02-13 04:45:21.891285 | orchestrator | "features": { 2026-02-13 04:45:21.891297 | orchestrator | "persistent": [ 2026-02-13 04:45:21.891309 | orchestrator | "kraken", 2026-02-13 04:45:21.891346 | orchestrator | "luminous", 2026-02-13 04:45:21.891359 | orchestrator | "mimic", 2026-02-13 04:45:21.891370 | orchestrator | "osdmap-prune", 2026-02-13 04:45:21.891382 | orchestrator | "nautilus", 2026-02-13 04:45:21.891395 | orchestrator | "octopus", 2026-02-13 04:45:21.891407 | orchestrator | "pacific", 2026-02-13 
04:45:21.891420 | orchestrator | "elector-pinging", 2026-02-13 04:45:21.891431 | orchestrator | "quincy", 2026-02-13 04:45:21.891444 | orchestrator | "reef" 2026-02-13 04:45:21.891456 | orchestrator | ], 2026-02-13 04:45:21.891468 | orchestrator | "optional": [] 2026-02-13 04:45:21.891479 | orchestrator | }, 2026-02-13 04:45:21.891490 | orchestrator | "mons": [ 2026-02-13 04:45:21.891518 | orchestrator | { 2026-02-13 04:45:21.891530 | orchestrator | "rank": 0, 2026-02-13 04:45:21.891541 | orchestrator | "name": "testbed-node-1", 2026-02-13 04:45:21.891552 | orchestrator | "public_addrs": { 2026-02-13 04:45:21.891562 | orchestrator | "addrvec": [ 2026-02-13 04:45:21.891573 | orchestrator | { 2026-02-13 04:45:21.891584 | orchestrator | "type": "v2", 2026-02-13 04:45:21.891596 | orchestrator | "addr": "192.168.16.8:3300", 2026-02-13 04:45:21.891606 | orchestrator | "nonce": 0 2026-02-13 04:45:21.891617 | orchestrator | }, 2026-02-13 04:45:21.891628 | orchestrator | { 2026-02-13 04:45:21.891639 | orchestrator | "type": "v1", 2026-02-13 04:45:21.891650 | orchestrator | "addr": "192.168.16.8:6789", 2026-02-13 04:45:21.891661 | orchestrator | "nonce": 0 2026-02-13 04:45:21.891671 | orchestrator | } 2026-02-13 04:45:21.891682 | orchestrator | ] 2026-02-13 04:45:21.891693 | orchestrator | }, 2026-02-13 04:45:21.891704 | orchestrator | "addr": "192.168.16.8:6789/0", 2026-02-13 04:45:21.891714 | orchestrator | "public_addr": "192.168.16.8:6789/0", 2026-02-13 04:45:21.891725 | orchestrator | "priority": 0, 2026-02-13 04:45:21.891736 | orchestrator | "weight": 0, 2026-02-13 04:45:21.891747 | orchestrator | "crush_location": "{}" 2026-02-13 04:45:21.891757 | orchestrator | }, 2026-02-13 04:45:21.891768 | orchestrator | { 2026-02-13 04:45:21.891779 | orchestrator | "rank": 1, 2026-02-13 04:45:21.891790 | orchestrator | "name": "testbed-node-0", 2026-02-13 04:45:21.891801 | orchestrator | "public_addrs": { 2026-02-13 04:45:21.891811 | orchestrator | "addrvec": [ 2026-02-13 
04:45:21.891822 | orchestrator | { 2026-02-13 04:45:21.891833 | orchestrator | "type": "v2", 2026-02-13 04:45:21.891844 | orchestrator | "addr": "192.168.16.10:3300", 2026-02-13 04:45:21.891855 | orchestrator | "nonce": 0 2026-02-13 04:45:21.891865 | orchestrator | }, 2026-02-13 04:45:21.891876 | orchestrator | { 2026-02-13 04:45:21.891887 | orchestrator | "type": "v1", 2026-02-13 04:45:21.891898 | orchestrator | "addr": "192.168.16.10:6789", 2026-02-13 04:45:21.891908 | orchestrator | "nonce": 0 2026-02-13 04:45:21.891919 | orchestrator | } 2026-02-13 04:45:21.891930 | orchestrator | ] 2026-02-13 04:45:21.891941 | orchestrator | }, 2026-02-13 04:45:21.891951 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-02-13 04:45:21.891962 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-02-13 04:45:21.891973 | orchestrator | "priority": 0, 2026-02-13 04:45:21.891984 | orchestrator | "weight": 0, 2026-02-13 04:45:21.891994 | orchestrator | "crush_location": "{}" 2026-02-13 04:45:21.892005 | orchestrator | }, 2026-02-13 04:45:21.892016 | orchestrator | { 2026-02-13 04:45:21.892026 | orchestrator | "rank": 2, 2026-02-13 04:45:21.892037 | orchestrator | "name": "testbed-node-2", 2026-02-13 04:45:21.892048 | orchestrator | "public_addrs": { 2026-02-13 04:45:21.892059 | orchestrator | "addrvec": [ 2026-02-13 04:45:21.892070 | orchestrator | { 2026-02-13 04:45:21.892080 | orchestrator | "type": "v2", 2026-02-13 04:45:21.892091 | orchestrator | "addr": "192.168.16.12:3300", 2026-02-13 04:45:21.892102 | orchestrator | "nonce": 0 2026-02-13 04:45:21.892113 | orchestrator | }, 2026-02-13 04:45:21.892123 | orchestrator | { 2026-02-13 04:45:21.892134 | orchestrator | "type": "v1", 2026-02-13 04:45:21.892145 | orchestrator | "addr": "192.168.16.12:6789", 2026-02-13 04:45:21.892155 | orchestrator | "nonce": 0 2026-02-13 04:45:21.892166 | orchestrator | } 2026-02-13 04:45:21.892177 | orchestrator | ] 2026-02-13 04:45:21.892187 | orchestrator | }, 2026-02-13 04:45:21.892198 
| orchestrator | "addr": "192.168.16.12:6789/0", 2026-02-13 04:45:21.892209 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2026-02-13 04:45:21.892220 | orchestrator | "priority": 0, 2026-02-13 04:45:21.892268 | orchestrator | "weight": 0, 2026-02-13 04:45:21.892280 | orchestrator | "crush_location": "{}" 2026-02-13 04:45:21.892290 | orchestrator | } 2026-02-13 04:45:21.892301 | orchestrator | ] 2026-02-13 04:45:21.892311 | orchestrator | } 2026-02-13 04:45:21.892322 | orchestrator | } 2026-02-13 04:45:21.892333 | orchestrator | 2026-02-13 04:45:21.892344 | orchestrator | # Ceph free space status 2026-02-13 04:45:21.892354 | orchestrator | 2026-02-13 04:45:21.892365 | orchestrator | + echo 2026-02-13 04:45:21.892376 | orchestrator | + echo '# Ceph free space status' 2026-02-13 04:45:21.892387 | orchestrator | + echo 2026-02-13 04:45:21.892397 | orchestrator | + ceph df 2026-02-13 04:45:22.469872 | orchestrator | --- RAW STORAGE --- 2026-02-13 04:45:22.469982 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-02-13 04:45:22.470012 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88 2026-02-13 04:45:22.470110 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.88 2026-02-13 04:45:22.470133 | orchestrator | 2026-02-13 04:45:22.470154 | orchestrator | --- POOLS --- 2026-02-13 04:45:22.470173 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-02-13 04:45:22.470194 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-02-13 04:45:22.470215 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-02-13 04:45:22.470252 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-02-13 04:45:22.470264 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-02-13 04:45:22.470275 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-02-13 04:45:22.470286 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-02-13 04:45:22.470297 | orchestrator | default.rgw.log 7 32 
3.6 KiB 209 408 KiB 0 35 GiB 2026-02-13 04:45:22.470308 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-02-13 04:45:22.470319 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-02-13 04:45:22.470330 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-02-13 04:45:22.470340 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-02-13 04:45:22.470351 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB 2026-02-13 04:45:22.470362 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-02-13 04:45:22.470372 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-02-13 04:45:22.510944 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-13 04:45:22.567975 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-13 04:45:22.568067 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-02-13 04:45:22.568082 | orchestrator | + osism apply facts 2026-02-13 04:45:31.255064 | orchestrator | 2026-02-13 04:45:31 | INFO  | Task deebb8f8-bbbc-42d0-8da9-9d3305ed779a (facts) was prepared for execution. 2026-02-13 04:45:31.255168 | orchestrator | 2026-02-13 04:45:31 | INFO  | It takes a moment until task deebb8f8-bbbc-42d0-8da9-9d3305ed779a (facts) has been started and output is visible here. 
2026-02-13 04:45:45.047753 | orchestrator | 2026-02-13 04:45:45.047876 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-13 04:45:45.047892 | orchestrator | 2026-02-13 04:45:45.047901 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-13 04:45:45.047908 | orchestrator | Friday 13 February 2026 04:45:35 +0000 (0:00:00.279) 0:00:00.279 ******* 2026-02-13 04:45:45.047916 | orchestrator | ok: [testbed-manager] 2026-02-13 04:45:45.047924 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:45:45.047932 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:45:45.047939 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:45:45.047947 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:45:45.047954 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:45:45.047961 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:45:45.047968 | orchestrator | 2026-02-13 04:45:45.047976 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-13 04:45:45.048012 | orchestrator | Friday 13 February 2026 04:45:37 +0000 (0:00:01.242) 0:00:01.521 ******* 2026-02-13 04:45:45.048023 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:45:45.048035 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:45:45.048049 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:45:45.048061 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:45:45.048072 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:45:45.048085 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:45:45.048096 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:45:45.048108 | orchestrator | 2026-02-13 04:45:45.048120 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-13 04:45:45.048134 | orchestrator | 2026-02-13 04:45:45.048147 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-13 04:45:45.048159 | orchestrator | Friday 13 February 2026 04:45:38 +0000 (0:00:01.362) 0:00:02.884 ******* 2026-02-13 04:45:45.048171 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:45:45.048184 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:45:45.048196 | orchestrator | ok: [testbed-manager] 2026-02-13 04:45:45.048205 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:45:45.048212 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:45:45.048219 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:45:45.048226 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:45:45.048315 | orchestrator | 2026-02-13 04:45:45.048327 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-13 04:45:45.048335 | orchestrator | 2026-02-13 04:45:45.048345 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-13 04:45:45.048354 | orchestrator | Friday 13 February 2026 04:45:44 +0000 (0:00:05.650) 0:00:08.535 ******* 2026-02-13 04:45:45.048363 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:45:45.048372 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:45:45.048380 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:45:45.048388 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:45:45.048397 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:45:45.048406 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:45:45.048414 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:45:45.048423 | orchestrator | 2026-02-13 04:45:45.048432 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:45:45.048441 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:45:45.048451 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-13 04:45:45.048459 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:45:45.048482 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:45:45.048492 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:45:45.048500 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:45:45.048508 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:45:45.048516 | orchestrator | 2026-02-13 04:45:45.048525 | orchestrator | 2026-02-13 04:45:45.048533 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:45:45.048547 | orchestrator | Friday 13 February 2026 04:45:44 +0000 (0:00:00.547) 0:00:09.083 ******* 2026-02-13 04:45:45.048559 | orchestrator | =============================================================================== 2026-02-13 04:45:45.048572 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.65s 2026-02-13 04:45:45.048597 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s 2026-02-13 04:45:45.048609 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.24s 2026-02-13 04:45:45.048621 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-02-13 04:45:45.356350 | orchestrator | + osism validate ceph-mons 2026-02-13 04:46:17.654491 | orchestrator | 2026-02-13 04:46:17.654645 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-13 04:46:17.654664 | orchestrator | 2026-02-13 04:46:17.654677 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-13 04:46:17.654689 | orchestrator | Friday 13 February 2026 04:46:02 +0000 (0:00:00.443) 0:00:00.443 ******* 2026-02-13 04:46:17.654701 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-13 04:46:17.654714 | orchestrator | 2026-02-13 04:46:17.654733 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-13 04:46:17.654752 | orchestrator | Friday 13 February 2026 04:46:02 +0000 (0:00:00.858) 0:00:01.302 ******* 2026-02-13 04:46:17.654772 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-13 04:46:17.654791 | orchestrator | 2026-02-13 04:46:17.654811 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-13 04:46:17.654830 | orchestrator | Friday 13 February 2026 04:46:03 +0000 (0:00:01.005) 0:00:02.307 ******* 2026-02-13 04:46:17.654850 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:46:17.654870 | orchestrator | 2026-02-13 04:46:17.654890 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-13 04:46:17.654910 | orchestrator | Friday 13 February 2026 04:46:04 +0000 (0:00:00.182) 0:00:02.489 ******* 2026-02-13 04:46:17.654931 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:46:17.654950 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:46:17.654961 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:46:17.654972 | orchestrator | 2026-02-13 04:46:17.654984 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-13 04:46:17.654995 | orchestrator | Friday 13 February 2026 04:46:04 +0000 (0:00:00.309) 0:00:02.799 ******* 2026-02-13 04:46:17.655006 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:46:17.655017 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:46:17.655035 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:46:17.655054 | 
orchestrator | 2026-02-13 04:46:17.655072 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-13 04:46:17.655092 | orchestrator | Friday 13 February 2026 04:46:05 +0000 (0:00:01.002) 0:00:03.801 ******* 2026-02-13 04:46:17.655110 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:46:17.655130 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:46:17.655150 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:46:17.655162 | orchestrator | 2026-02-13 04:46:17.655173 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-13 04:46:17.655184 | orchestrator | Friday 13 February 2026 04:46:05 +0000 (0:00:00.295) 0:00:04.097 ******* 2026-02-13 04:46:17.655195 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:46:17.655206 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:46:17.655217 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:46:17.655228 | orchestrator | 2026-02-13 04:46:17.655240 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-13 04:46:17.655291 | orchestrator | Friday 13 February 2026 04:46:06 +0000 (0:00:00.524) 0:00:04.622 ******* 2026-02-13 04:46:17.655310 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:46:17.655331 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:46:17.655345 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:46:17.655355 | orchestrator | 2026-02-13 04:46:17.655367 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-13 04:46:17.655378 | orchestrator | Friday 13 February 2026 04:46:06 +0000 (0:00:00.297) 0:00:04.919 ******* 2026-02-13 04:46:17.655389 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:46:17.655431 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:46:17.655442 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:46:17.655453 | orchestrator | 2026-02-13 
04:46:17.655464 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-02-13 04:46:17.655475 | orchestrator | Friday 13 February 2026 04:46:06 +0000 (0:00:00.279) 0:00:05.198 ******* 2026-02-13 04:46:17.655485 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:46:17.655496 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:46:17.655506 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:46:17.655518 | orchestrator | 2026-02-13 04:46:17.655529 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-13 04:46:17.655540 | orchestrator | Friday 13 February 2026 04:46:07 +0000 (0:00:00.518) 0:00:05.717 ******* 2026-02-13 04:46:17.655550 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:46:17.655561 | orchestrator | 2026-02-13 04:46:17.655571 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-13 04:46:17.655583 | orchestrator | Friday 13 February 2026 04:46:07 +0000 (0:00:00.254) 0:00:05.971 ******* 2026-02-13 04:46:17.655593 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:46:17.655604 | orchestrator | 2026-02-13 04:46:17.655615 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-13 04:46:17.655626 | orchestrator | Friday 13 February 2026 04:46:07 +0000 (0:00:00.264) 0:00:06.235 ******* 2026-02-13 04:46:17.655636 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:46:17.655647 | orchestrator | 2026-02-13 04:46:17.655658 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-13 04:46:17.655668 | orchestrator | Friday 13 February 2026 04:46:08 +0000 (0:00:00.256) 0:00:06.492 ******* 2026-02-13 04:46:17.655679 | orchestrator | 2026-02-13 04:46:17.655690 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-13 04:46:17.655700 | orchestrator | 
Friday 13 February 2026 04:46:08 +0000 (0:00:00.073) 0:00:06.566 ******* 2026-02-13 04:46:17.655711 | orchestrator | 2026-02-13 04:46:17.655722 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-13 04:46:17.655732 | orchestrator | Friday 13 February 2026 04:46:08 +0000 (0:00:00.110) 0:00:06.677 ******* 2026-02-13 04:46:17.655743 | orchestrator | 2026-02-13 04:46:17.655754 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-13 04:46:17.655765 | orchestrator | Friday 13 February 2026 04:46:08 +0000 (0:00:00.075) 0:00:06.752 ******* 2026-02-13 04:46:17.655775 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:46:17.655786 | orchestrator | 2026-02-13 04:46:17.655797 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2026-02-13 04:46:17.655826 | orchestrator | Friday 13 February 2026 04:46:08 +0000 (0:00:00.237) 0:00:06.989 ******* 2026-02-13 04:46:17.655838 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:46:17.655850 | orchestrator | 2026-02-13 04:46:17.655893 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2026-02-13 04:46:17.655914 | orchestrator | Friday 13 February 2026 04:46:08 +0000 (0:00:00.242) 0:00:07.232 ******* 2026-02-13 04:46:17.655933 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:46:17.655952 | orchestrator | 2026-02-13 04:46:17.655964 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2026-02-13 04:46:17.655975 | orchestrator | Friday 13 February 2026 04:46:08 +0000 (0:00:00.120) 0:00:07.352 ******* 2026-02-13 04:46:17.655985 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:46:17.656001 | orchestrator | 2026-02-13 04:46:17.656012 | orchestrator | TASK [Set quorum test data] **************************************************** 2026-02-13 04:46:17.656023 | orchestrator | Friday 
13 February 2026 04:46:10 +0000 (0:00:01.552) 0:00:08.905 *******
2026-02-13 04:46:17.656034 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:17.656044 | orchestrator |
2026-02-13 04:46:17.656055 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-02-13 04:46:17.656066 | orchestrator | Friday 13 February 2026 04:46:11 +0000 (0:00:00.518) 0:00:09.423 *******
2026-02-13 04:46:17.656086 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:17.656097 | orchestrator |
2026-02-13 04:46:17.656108 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-02-13 04:46:17.656119 | orchestrator | Friday 13 February 2026 04:46:11 +0000 (0:00:00.138) 0:00:09.562 *******
2026-02-13 04:46:17.656130 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:17.656140 | orchestrator |
2026-02-13 04:46:17.656151 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-02-13 04:46:17.656162 | orchestrator | Friday 13 February 2026 04:46:11 +0000 (0:00:00.343) 0:00:09.906 *******
2026-02-13 04:46:17.656173 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:17.656183 | orchestrator |
2026-02-13 04:46:17.656194 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-02-13 04:46:17.656205 | orchestrator | Friday 13 February 2026 04:46:11 +0000 (0:00:00.300) 0:00:10.206 *******
2026-02-13 04:46:17.656215 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:17.656226 | orchestrator |
2026-02-13 04:46:17.656237 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-02-13 04:46:17.656247 | orchestrator | Friday 13 February 2026 04:46:11 +0000 (0:00:00.121) 0:00:10.327 *******
2026-02-13 04:46:17.656289 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:17.656300 | orchestrator |
2026-02-13 04:46:17.656311 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-02-13 04:46:17.656322 | orchestrator | Friday 13 February 2026 04:46:12 +0000 (0:00:00.136) 0:00:10.464 *******
2026-02-13 04:46:17.656333 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:17.656344 | orchestrator |
2026-02-13 04:46:17.656355 | orchestrator | TASK [Gather status data] ******************************************************
2026-02-13 04:46:17.656366 | orchestrator | Friday 13 February 2026 04:46:12 +0000 (0:00:00.112) 0:00:10.577 *******
2026-02-13 04:46:17.656377 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:46:17.656388 | orchestrator |
2026-02-13 04:46:17.656399 | orchestrator | TASK [Set health test data] ****************************************************
2026-02-13 04:46:17.656410 | orchestrator | Friday 13 February 2026 04:46:13 +0000 (0:00:01.232) 0:00:11.810 *******
2026-02-13 04:46:17.656421 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:17.656450 | orchestrator |
2026-02-13 04:46:17.656461 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-02-13 04:46:17.656472 | orchestrator | Friday 13 February 2026 04:46:13 +0000 (0:00:00.305) 0:00:12.115 *******
2026-02-13 04:46:17.656483 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:17.656493 | orchestrator |
2026-02-13 04:46:17.656505 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-02-13 04:46:17.656515 | orchestrator | Friday 13 February 2026 04:46:13 +0000 (0:00:00.137) 0:00:12.253 *******
2026-02-13 04:46:17.656526 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:17.656537 | orchestrator |
2026-02-13 04:46:17.656548 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-02-13 04:46:17.656559 | orchestrator | Friday 13 February 2026 04:46:14 +0000 (0:00:00.159) 0:00:12.413 *******
2026-02-13 04:46:17.656570 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:17.656581 | orchestrator |
2026-02-13 04:46:17.656592 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-02-13 04:46:17.656603 | orchestrator | Friday 13 February 2026 04:46:14 +0000 (0:00:00.115) 0:00:12.529 *******
2026-02-13 04:46:17.656620 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:17.656633 | orchestrator |
2026-02-13 04:46:17.656651 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-13 04:46:17.656663 | orchestrator | Friday 13 February 2026 04:46:14 +0000 (0:00:00.329) 0:00:12.859 *******
2026-02-13 04:46:17.656673 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:17.656684 | orchestrator |
2026-02-13 04:46:17.656695 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-13 04:46:17.656706 | orchestrator | Friday 13 February 2026 04:46:14 +0000 (0:00:00.272) 0:00:13.131 *******
2026-02-13 04:46:17.656724 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:17.656735 | orchestrator |
2026-02-13 04:46:17.656745 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-13 04:46:17.656756 | orchestrator | Friday 13 February 2026 04:46:15 +0000 (0:00:00.242) 0:00:13.374 *******
2026-02-13 04:46:17.656767 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:17.656778 | orchestrator |
2026-02-13 04:46:17.656789 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-13 04:46:17.656800 | orchestrator | Friday 13 February 2026 04:46:16 +0000 (0:00:01.811) 0:00:15.186 *******
2026-02-13 04:46:17.656810 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:17.656821 | orchestrator |
2026-02-13 04:46:17.656832 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-13 04:46:17.656843 | orchestrator | Friday 13 February 2026 04:46:17 +0000 (0:00:00.284) 0:00:15.471 *******
2026-02-13 04:46:17.656854 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:17.656865 | orchestrator |
2026-02-13 04:46:17.656884 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-13 04:46:20.443764 | orchestrator | Friday 13 February 2026 04:46:17 +0000 (0:00:00.289) 0:00:15.761 *******
2026-02-13 04:46:20.443909 | orchestrator |
2026-02-13 04:46:20.443938 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-13 04:46:20.443960 | orchestrator | Friday 13 February 2026 04:46:17 +0000 (0:00:00.083) 0:00:15.844 *******
2026-02-13 04:46:20.443978 | orchestrator |
2026-02-13 04:46:20.443998 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-13 04:46:20.444017 | orchestrator | Friday 13 February 2026 04:46:17 +0000 (0:00:00.079) 0:00:15.924 *******
2026-02-13 04:46:20.444036 | orchestrator |
2026-02-13 04:46:20.444055 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-13 04:46:20.444074 | orchestrator | Friday 13 February 2026 04:46:17 +0000 (0:00:00.073) 0:00:15.997 *******
2026-02-13 04:46:20.444094 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:20.444114 | orchestrator |
2026-02-13 04:46:20.444131 | orchestrator | TASK [Print report file information] *******************************************
2026-02-13 04:46:20.444164 | orchestrator | Friday 13 February 2026 04:46:19 +0000 (0:00:01.613) 0:00:17.610 *******
2026-02-13 04:46:20.444184 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-13 04:46:20.444204 | orchestrator |     "msg": [
2026-02-13 04:46:20.444226 | orchestrator |         "Validator run completed.",
2026-02-13 04:46:20.444246 | orchestrator |         "You can find the report file here:",
2026-02-13 04:46:20.444296 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2026-02-13T04:46:02+00:00-report.json",
2026-02-13 04:46:20.444317 | orchestrator |         "on the following host:",
2026-02-13 04:46:20.444335 | orchestrator |         "testbed-manager"
2026-02-13 04:46:20.444354 | orchestrator |     ]
2026-02-13 04:46:20.444374 | orchestrator | }
2026-02-13 04:46:20.444394 | orchestrator |
2026-02-13 04:46:20.444412 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 04:46:20.444436 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-02-13 04:46:20.444457 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 04:46:20.444478 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 04:46:20.444491 | orchestrator |
2026-02-13 04:46:20.444503 | orchestrator |
2026-02-13 04:46:20.444517 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 04:46:20.444529 | orchestrator | Friday 13 February 2026 04:46:20 +0000 (0:00:00.831) 0:00:18.441 *******
2026-02-13 04:46:20.444574 | orchestrator | ===============================================================================
2026-02-13 04:46:20.444587 | orchestrator | Aggregate test results step one ----------------------------------------- 1.81s
2026-02-13 04:46:20.444600 | orchestrator | Write report file ------------------------------------------------------- 1.61s
2026-02-13 04:46:20.444612 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.55s
2026-02-13 04:46:20.444623 | orchestrator | Gather status data ------------------------------------------------------ 1.23s
2026-02-13 04:46:20.444633 | orchestrator | Create report output directory ------------------------------------------ 1.01s
2026-02-13 04:46:20.444644 | orchestrator | Get container info ------------------------------------------------------ 1.00s
2026-02-13 04:46:20.444656 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s
2026-02-13 04:46:20.444676 | orchestrator | Print report file information ------------------------------------------- 0.83s
2026-02-13 04:46:20.444694 | orchestrator | Set test result to passed if container is existing ---------------------- 0.52s
2026-02-13 04:46:20.444711 | orchestrator | Set quorum test data ---------------------------------------------------- 0.52s
2026-02-13 04:46:20.444746 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.52s
2026-02-13 04:46:20.444763 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s
2026-02-13 04:46:20.444779 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s
2026-02-13 04:46:20.444797 | orchestrator | Prepare test data for container existance test -------------------------- 0.31s
2026-02-13 04:46:20.444812 | orchestrator | Set health test data ---------------------------------------------------- 0.31s
2026-02-13 04:46:20.444828 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s
2026-02-13 04:46:20.444844 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2026-02-13 04:46:20.444860 | orchestrator | Set test result to failed if container is missing ----------------------- 0.30s
2026-02-13 04:46:20.444878 | orchestrator | Aggregate test results step three --------------------------------------- 0.29s
2026-02-13 04:46:20.444895 | orchestrator | Aggregate test results step two ------------------------------------------ 0.28s
2026-02-13 04:46:20.811941 | orchestrator | + osism validate ceph-mgrs
2026-02-13 04:46:51.970508 | orchestrator |
2026-02-13 04:46:51.970613 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-02-13 04:46:51.970630 | orchestrator |
2026-02-13 04:46:51.970643 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-13 04:46:51.970655 | orchestrator | Friday 13 February 2026 04:46:37 +0000 (0:00:00.431) 0:00:00.431 *******
2026-02-13 04:46:51.970666 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:51.970677 | orchestrator |
2026-02-13 04:46:51.970689 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-13 04:46:51.970700 | orchestrator | Friday 13 February 2026 04:46:38 +0000 (0:00:00.824) 0:00:01.255 *******
2026-02-13 04:46:51.970711 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:51.970722 | orchestrator |
2026-02-13 04:46:51.970732 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-13 04:46:51.970743 | orchestrator | Friday 13 February 2026 04:46:39 +0000 (0:00:01.009) 0:00:02.265 *******
2026-02-13 04:46:51.970754 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:51.970766 | orchestrator |
2026-02-13 04:46:51.970777 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-13 04:46:51.970788 | orchestrator | Friday 13 February 2026 04:46:39 +0000 (0:00:00.119) 0:00:02.385 *******
2026-02-13 04:46:51.970799 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:51.970809 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:46:51.970820 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:46:51.970831 | orchestrator |
2026-02-13 04:46:51.970842 | orchestrator | TASK [Get container info] ******************************************************
2026-02-13 04:46:51.970853 | orchestrator | Friday 13 February 2026 04:46:39 +0000 (0:00:00.301) 0:00:02.686 *******
2026-02-13 04:46:51.970888 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:46:51.970899 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:51.970910 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:46:51.970921 | orchestrator |
2026-02-13 04:46:51.970932 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-13 04:46:51.970942 | orchestrator | Friday 13 February 2026 04:46:40 +0000 (0:00:01.054) 0:00:03.740 *******
2026-02-13 04:46:51.970954 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:51.970965 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:46:51.970975 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:46:51.970986 | orchestrator |
2026-02-13 04:46:51.970997 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-13 04:46:51.971008 | orchestrator | Friday 13 February 2026 04:46:41 +0000 (0:00:00.309) 0:00:04.050 *******
2026-02-13 04:46:51.971019 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:51.971030 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:46:51.971041 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:46:51.971052 | orchestrator |
2026-02-13 04:46:51.971063 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-13 04:46:51.971074 | orchestrator | Friday 13 February 2026 04:46:41 +0000 (0:00:00.509) 0:00:04.560 *******
2026-02-13 04:46:51.971085 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:51.971095 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:46:51.971106 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:46:51.971117 | orchestrator |
2026-02-13 04:46:51.971127 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-02-13 04:46:51.971138 | orchestrator | Friday 13 February 2026 04:46:42 +0000 (0:00:00.310) 0:00:04.889 *******
2026-02-13 04:46:51.971149 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:51.971160 | orchestrator | skipping: [testbed-node-1]
2026-02-13 04:46:51.971171 | orchestrator | skipping: [testbed-node-2]
2026-02-13 04:46:51.971181 | orchestrator |
2026-02-13 04:46:51.971192 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-02-13 04:46:51.971203 | orchestrator | Friday 13 February 2026 04:46:42 +0000 (0:00:00.310) 0:00:05.200 *******
2026-02-13 04:46:51.971214 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:51.971225 | orchestrator | ok: [testbed-node-1]
2026-02-13 04:46:51.971235 | orchestrator | ok: [testbed-node-2]
2026-02-13 04:46:51.971246 | orchestrator |
2026-02-13 04:46:51.971257 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-13 04:46:51.971294 | orchestrator | Friday 13 February 2026 04:46:42 +0000 (0:00:00.468) 0:00:05.668 *******
2026-02-13 04:46:51.971307 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:51.971318 | orchestrator |
2026-02-13 04:46:51.971329 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-13 04:46:51.971340 | orchestrator | Friday 13 February 2026 04:46:43 +0000 (0:00:00.249) 0:00:05.918 *******
2026-02-13 04:46:51.971350 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:51.971361 | orchestrator |
2026-02-13 04:46:51.971372 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-13 04:46:51.971382 | orchestrator | Friday 13 February 2026 04:46:43 +0000 (0:00:00.269) 0:00:06.187 *******
2026-02-13 04:46:51.971393 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:51.971403 | orchestrator |
2026-02-13 04:46:51.971414 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-13 04:46:51.971425 | orchestrator | Friday 13 February 2026 04:46:43 +0000 (0:00:00.239) 0:00:06.427 *******
2026-02-13 04:46:51.971436 | orchestrator |
2026-02-13 04:46:51.971447 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-13 04:46:51.971457 | orchestrator | Friday 13 February 2026 04:46:43 +0000 (0:00:00.072) 0:00:06.499 *******
2026-02-13 04:46:51.971468 | orchestrator |
2026-02-13 04:46:51.971479 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-13 04:46:51.971489 | orchestrator | Friday 13 February 2026 04:46:43 +0000 (0:00:00.070) 0:00:06.569 *******
2026-02-13 04:46:51.971508 | orchestrator |
2026-02-13 04:46:51.971519 | orchestrator | TASK [Print report file information] *******************************************
2026-02-13 04:46:51.971530 | orchestrator | Friday 13 February 2026 04:46:43 +0000 (0:00:00.078) 0:00:06.648 *******
2026-02-13 04:46:51.971541 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:51.971552 | orchestrator |
2026-02-13 04:46:51.971562 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-13 04:46:51.971573 | orchestrator | Friday 13 February 2026 04:46:44 +0000 (0:00:00.253) 0:00:06.901 *******
2026-02-13 04:46:51.971584 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:51.971595 | orchestrator |
2026-02-13 04:46:51.971625 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-02-13 04:46:51.971637 | orchestrator | Friday 13 February 2026 04:46:44 +0000 (0:00:00.239) 0:00:07.141 *******
2026-02-13 04:46:51.971647 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:51.971658 | orchestrator |
2026-02-13 04:46:51.971669 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-02-13 04:46:51.971681 | orchestrator | Friday 13 February 2026 04:46:44 +0000 (0:00:00.115) 0:00:07.256 *******
2026-02-13 04:46:51.971691 | orchestrator | changed: [testbed-node-0]
2026-02-13 04:46:51.971702 | orchestrator |
2026-02-13 04:46:51.971713 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-02-13 04:46:51.971724 | orchestrator | Friday 13 February 2026 04:46:46 +0000 (0:00:02.000) 0:00:09.257 *******
2026-02-13 04:46:51.971735 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:51.971746 | orchestrator |
2026-02-13 04:46:51.971774 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-02-13 04:46:51.971786 | orchestrator | Friday 13 February 2026 04:46:46 +0000 (0:00:00.431) 0:00:09.688 *******
2026-02-13 04:46:51.971797 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:51.971808 | orchestrator |
2026-02-13 04:46:51.971818 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-02-13 04:46:51.971829 | orchestrator | Friday 13 February 2026 04:46:47 +0000 (0:00:00.310) 0:00:09.998 *******
2026-02-13 04:46:51.971840 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:51.971851 | orchestrator |
2026-02-13 04:46:51.971862 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-02-13 04:46:51.971872 | orchestrator | Friday 13 February 2026 04:46:47 +0000 (0:00:00.128) 0:00:10.127 *******
2026-02-13 04:46:51.971883 | orchestrator | ok: [testbed-node-0]
2026-02-13 04:46:51.971894 | orchestrator |
2026-02-13 04:46:51.971904 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-13 04:46:51.971915 | orchestrator | Friday 13 February 2026 04:46:47 +0000 (0:00:00.147) 0:00:10.274 *******
2026-02-13 04:46:51.971926 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:51.971937 | orchestrator |
2026-02-13 04:46:51.971948 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-13 04:46:51.971959 | orchestrator | Friday 13 February 2026 04:46:47 +0000 (0:00:00.250) 0:00:10.524 *******
2026-02-13 04:46:51.971970 | orchestrator | skipping: [testbed-node-0]
2026-02-13 04:46:51.971980 | orchestrator |
2026-02-13 04:46:51.971991 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-13 04:46:51.972002 | orchestrator | Friday 13 February 2026 04:46:47 +0000 (0:00:00.266) 0:00:10.791 *******
2026-02-13 04:46:51.972013 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:51.972024 | orchestrator |
2026-02-13 04:46:51.972034 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-13 04:46:51.972045 | orchestrator | Friday 13 February 2026 04:46:49 +0000 (0:00:01.344) 0:00:12.135 *******
2026-02-13 04:46:51.972056 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:51.972067 | orchestrator |
2026-02-13 04:46:51.972077 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-13 04:46:51.972088 | orchestrator | Friday 13 February 2026 04:46:49 +0000 (0:00:00.274) 0:00:12.410 *******
2026-02-13 04:46:51.972105 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:51.972116 | orchestrator |
2026-02-13 04:46:51.972127 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-13 04:46:51.972138 | orchestrator | Friday 13 February 2026 04:46:49 +0000 (0:00:00.267) 0:00:12.677 *******
2026-02-13 04:46:51.972149 | orchestrator |
2026-02-13 04:46:51.972160 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-13 04:46:51.972170 | orchestrator | Friday 13 February 2026 04:46:49 +0000 (0:00:00.070) 0:00:12.747 *******
2026-02-13 04:46:51.972181 | orchestrator |
2026-02-13 04:46:51.972192 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-13 04:46:51.972203 | orchestrator | Friday 13 February 2026 04:46:49 +0000 (0:00:00.069) 0:00:12.817 *******
2026-02-13 04:46:51.972213 | orchestrator |
2026-02-13 04:46:51.972224 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-13 04:46:51.972235 | orchestrator | Friday 13 February 2026 04:46:50 +0000 (0:00:00.256) 0:00:13.073 *******
2026-02-13 04:46:51.972245 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-13 04:46:51.972256 | orchestrator |
2026-02-13 04:46:51.972305 | orchestrator | TASK [Print report file information] *******************************************
2026-02-13 04:46:51.972327 | orchestrator | Friday 13 February 2026 04:46:51 +0000 (0:00:01.314) 0:00:14.387 *******
2026-02-13 04:46:51.972346 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-13 04:46:51.972366 | orchestrator |     "msg": [
2026-02-13 04:46:51.972387 | orchestrator |         "Validator run completed.",
2026-02-13 04:46:51.972414 | orchestrator |         "You can find the report file here:",
2026-02-13 04:46:51.972435 | orchestrator |         "/opt/reports/validator/ceph-mgrs-validator-2026-02-13T04:46:38+00:00-report.json",
2026-02-13 04:46:51.972450 | orchestrator |         "on the following host:",
2026-02-13 04:46:51.972461 | orchestrator |         "testbed-manager"
2026-02-13 04:46:51.972472 | orchestrator |     ]
2026-02-13 04:46:51.972483 | orchestrator | }
2026-02-13 04:46:51.972494 | orchestrator |
2026-02-13 04:46:51.972505 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 04:46:51.972517 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-13 04:46:51.972529 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 04:46:51.972550 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-13 04:46:52.296332 | orchestrator |
2026-02-13 04:46:52.296463 | orchestrator |
2026-02-13 04:46:52.296481 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 04:46:52.296495 | orchestrator | Friday 13 February 2026 04:46:51 +0000 (0:00:00.442) 0:00:14.830 *******
2026-02-13 04:46:52.296506 | orchestrator | ===============================================================================
2026-02-13 04:46:52.296518 | orchestrator | Gather list of mgr modules ---------------------------------------------- 2.00s
2026-02-13 04:46:52.296528 | orchestrator | Aggregate test results step one ----------------------------------------- 1.34s
2026-02-13 04:46:52.296539 | orchestrator | Write report file ------------------------------------------------------- 1.31s
2026-02-13 04:46:52.296550 | orchestrator | Get container info ------------------------------------------------------ 1.05s
2026-02-13 04:46:52.296561 | orchestrator | Create report output directory ------------------------------------------ 1.01s
2026-02-13 04:46:52.296571 | orchestrator | Get timestamp for report file ------------------------------------------- 0.82s
2026-02-13 04:46:52.296582 | orchestrator | Set test result to passed if container is existing ---------------------- 0.51s
2026-02-13 04:46:52.296593 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.47s
2026-02-13 04:46:52.296632 | orchestrator | Print report file information ------------------------------------------- 0.44s
2026-02-13 04:46:52.296643 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.43s
2026-02-13 04:46:52.296654 | orchestrator | Flush handlers ---------------------------------------------------------- 0.40s
2026-02-13 04:46:52.296665 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s
2026-02-13 04:46:52.296675 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s
2026-02-13 04:46:52.296686 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.31s
2026-02-13 04:46:52.296697 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2026-02-13 04:46:52.296707 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2026-02-13 04:46:52.296718 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2026-02-13 04:46:52.296729 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s
2026-02-13 04:46:52.296739 | orchestrator | Aggregate test results step three --------------------------------------- 0.27s
2026-02-13 04:46:52.296750 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.27s
2026-02-13 04:46:52.596757 | orchestrator | + osism validate ceph-osds
2026-02-13 04:47:14.004967 | orchestrator |
2026-02-13 04:47:14.005090 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-02-13 04:47:14.005110 | orchestrator |
2026-02-13 04:47:14.005128 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-13 04:47:14.005147 | orchestrator | Friday 13 February 2026 04:47:09 +0000 (0:00:00.440) 0:00:00.440 *******
2026-02-13 04:47:14.005166 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-13 04:47:14.005184 | orchestrator |
2026-02-13 04:47:14.005202 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-13 04:47:14.005222 | orchestrator | Friday 13 February 2026 04:47:10 +0000 (0:00:00.849) 0:00:01.290 *******
2026-02-13 04:47:14.005242 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-13 04:47:14.005259 | orchestrator |
2026-02-13 04:47:14.005322 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-13 04:47:14.005345 | orchestrator | Friday 13 February 2026 04:47:10 +0000 (0:00:00.532) 0:00:01.823 *******
2026-02-13 04:47:14.005364 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-13 04:47:14.005384 | orchestrator |
2026-02-13 04:47:14.005403 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-13 04:47:14.005422 | orchestrator | Friday 13 February 2026 04:47:11 +0000 (0:00:00.699) 0:00:02.522 *******
2026-02-13 04:47:14.005437 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:47:14.005450 | orchestrator |
2026-02-13 04:47:14.005461 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-13 04:47:14.005473 | orchestrator | Friday 13 February 2026 04:47:11 +0000 (0:00:00.145) 0:00:02.668 *******
2026-02-13 04:47:14.005487 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:47:14.005499 | orchestrator |
2026-02-13 04:47:14.005512 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-13 04:47:14.005524 | orchestrator | Friday 13 February 2026 04:47:11 +0000 (0:00:00.146) 0:00:02.815 *******
2026-02-13 04:47:14.005537 | orchestrator | skipping: [testbed-node-3]
2026-02-13 04:47:14.005549 | orchestrator | skipping: [testbed-node-4]
2026-02-13 04:47:14.005562 | orchestrator | skipping: [testbed-node-5]
2026-02-13 04:47:14.005575 | orchestrator |
2026-02-13 04:47:14.005607 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-13 04:47:14.005621 | orchestrator | Friday 13 February 2026 04:47:12 +0000 (0:00:00.321) 0:00:03.136 *******
2026-02-13 04:47:14.005633 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:47:14.005650 | orchestrator |
2026-02-13 04:47:14.005669 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-13 04:47:14.005718 | orchestrator | Friday 13 February 2026 04:47:12 +0000 (0:00:00.156) 0:00:03.293 *******
2026-02-13 04:47:14.005738 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:47:14.005756 | orchestrator | ok: [testbed-node-4]
2026-02-13 04:47:14.005776 | orchestrator | ok: [testbed-node-5]
2026-02-13 04:47:14.005795 | orchestrator |
2026-02-13 04:47:14.005813 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-02-13 04:47:14.005829 | orchestrator | Friday 13 February 2026 04:47:12 +0000 (0:00:00.316) 0:00:03.610 *******
2026-02-13 04:47:14.005840 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:47:14.005851 | orchestrator |
2026-02-13 04:47:14.005862 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-13 04:47:14.005872 | orchestrator | Friday 13 February 2026 04:47:13 +0000 (0:00:00.774) 0:00:04.384 *******
2026-02-13 04:47:14.005883 | orchestrator | ok: [testbed-node-3]
2026-02-13 04:47:14.005894 | orchestrator | ok: [testbed-node-4]
2026-02-13 04:47:14.005904 | orchestrator | ok: [testbed-node-5]
2026-02-13 04:47:14.005915 | orchestrator |
2026-02-13 04:47:14.005926 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-02-13 04:47:14.005937 | orchestrator | Friday 13 February 2026 04:47:13 +0000 (0:00:00.308) 0:00:04.693 *******
2026-02-13 04:47:14.005950 | orchestrator | skipping: [testbed-node-3] => (item={'id': '173d30d7b72784a25fdef221e02737dece55957815ba5500b42bf91fb55e0061', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-13 04:47:14.005965 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a98f01524388947f01c899df11441a06efafc08986c93fa3775b77b890c28d8b', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-13 04:47:14.005978 | orchestrator | skipping: [testbed-node-3] => (item={'id': '51120246359b8ccb427f1be64c6fd53ef6bcca1ca673cf86ad2b5526e12adb85', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-13 04:47:14.005989 | orchestrator | skipping: [testbed-node-3] => (item={'id': '29dab1d3c5136d554a575d22d0d5a1eace8775924660d2414adf968eff756165', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-13 04:47:14.006000 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd8f182783ec2e7588a4e9fa9f8b6fb4084a1f548a77c2ac17a6a27cb6d422ec4', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})
2026-02-13 04:47:14.006124 | orchestrator | skipping: [testbed-node-3] => (item={'id': '946f47e1f096cab5407d46cbfd8780fa775b35ebec367f63e30fa3496bcfab7a', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-13 04:47:14.006157 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e5bb0ae0e39bd7fe940e4b783528f0a77be06d0fbefd5c7134454bcd867e8fe1', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-13 04:47:14.006177 | orchestrator | skipping: [testbed-node-3] => (item={'id': '548c978e8e2f7ecfd787aa6db6b9c992e9ca35398d5c876992b8afff540bd774', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})
2026-02-13 04:47:14.006197 | orchestrator | skipping: [testbed-node-3] => (item={'id': '78506787fb0d1ba16b99bddac65e6e21b5f825d5907441807b08e240dcff3529', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-13 04:47:14.006232 | orchestrator | skipping: [testbed-node-3] => (item={'id': '985ab628937c6644ec2652f0c987ccaf86e0d3e2261fd1510d0d2a9de67b484a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-13 04:47:14.006253 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6248f79482b79866b51219bea3069d82d0b430b5b983bf6553e99b6e6efd9417', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-13 04:47:14.006272 | orchestrator | ok: [testbed-node-3] => (item={'id': '1a02dab3c6b566b9ace47b858b3485b704fb8e27c09ef9d816d8bc0be02c0b9a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-13 04:47:14.006356 | orchestrator | ok: [testbed-node-3] => (item={'id': 'a3b72a3d5d7fa1797f66b49332b9a567af128e757ee5fc9f96549ed4056ba668', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'})
2026-02-13 04:47:14.006376 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6e570f37302b43369129f44006731b94b00b8403b429ba4c289c82382654512a', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})
2026-02-13 04:47:14.006393 | orchestrator | skipping: [testbed-node-3] => (item={'id': '431c0dcebcec977001f8284ef2a56aab6ea8e160cee851b305c8a964cdfe29fd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-13 04:47:14.006412 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd473c01ba7d3d52cfd5e57cb2d6495b878340fa04397d29db19f2ce76527f600', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})
2026-02-13 04:47:14.006431 | orchestrator | skipping: [testbed-node-3] => (item={'id': '90e29e7180abf9b1335850e6e344417f067e8464dda187012903925b29e2f3c7', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-13 04:47:14.006451 | orchestrator | skipping: [testbed-node-3] => (item={'id': '29a5f5b8aea1f16cdde16c9af90aa46ac213b2b85d7e0ddb28bc5448a4d2f85d', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})
2026-02-13 04:47:14.006469 | orchestrator | skipping: [testbed-node-4] => (item={'id': '964bfe2d658e8df71f1320f345a3e8a2dcaf258bad9f58225aef01302d430666', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-13 04:47:14.006490 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a0abba732907d3e69fc88bb41b0d1553b346bb6a61975fd594ffd8f3d7508e7e', 'image':
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-13 04:47:14.006526 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e718a95cddee2ca4a1c5b52def48ce2a322a6f57fe7b09868cb58704520b65c8', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-13 04:47:14.174479 | orchestrator | skipping: [testbed-node-4] => (item={'id': '333b0f513b1a4beadd7559f0b4d74529cf140559d1863cbdee54c022062a20e7', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-13 04:47:14.174627 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd87c20269fc5bf38de22ff4297d7e976c1e15c6ce074d8d0e54fac2d8ae758fe', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-13 04:47:14.174658 | orchestrator | skipping: [testbed-node-4] => (item={'id': '79062c79c3e15762eec299740f8fdf70b19c36339498f6aee0f1e1a06732fa4e', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-13 04:47:14.174669 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f479aeb873002167f3ff74f14e378a7b402e0d2a9dbda0fb1f368122e6beeae4', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-13 04:47:14.174680 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1928e11ef1e451a08907183d7ce95889bb95f9d40860e0de21603b8fb82d24de', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 
'status': 'Up 41 minutes (healthy)'})  2026-02-13 04:47:14.174688 | orchestrator | skipping: [testbed-node-4] => (item={'id': '51ec6ecbba6b4caf078160ff369e2a1473e305219c273afbfdf9e785768b6e27', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 47 minutes (healthy)'})  2026-02-13 04:47:14.174696 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7e926f9cdcaab30cbafd805ad9ab3b165dd252bee1db65714b9dddbb628a5084', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-13 04:47:14.174704 | orchestrator | skipping: [testbed-node-4] => (item={'id': '15a3f926cbaf42fbfc5a1a46cb8437e1a2cb558d10ed9027da807ce0db8b9fad', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-13 04:47:14.174712 | orchestrator | skipping: [testbed-node-4] => (item={'id': '215ec873964ce28e856569a99e558aa0af8bea587d0f68c0365ab8f7bccb2cd7', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-13 04:47:14.174721 | orchestrator | ok: [testbed-node-4] => (item={'id': 'd15af1c8856ee5f1359b518ef4abf017fef7cc3a30f2594cedcc85881b909bc1', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-13 04:47:14.174729 | orchestrator | ok: [testbed-node-4] => (item={'id': 'becd04c984421a342478fc547b58599eeb314c42b3f81bd440ea6eb7f10b2b27', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-13 04:47:14.174737 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'3086e26d04fa7257879a8d61fd65ccbd804080fa671afce153ef84baea1ec804', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-13 04:47:14.174744 | orchestrator | skipping: [testbed-node-4] => (item={'id': '967b479add35d48ec43c9fc6dff858709cc454a0e73502bb2a985575b57e8c28', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-13 04:47:14.174752 | orchestrator | skipping: [testbed-node-4] => (item={'id': '81c73f34eab71691d766640f84b913a79ee153cd92960c2b2aba12d42be13c19', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-13 04:47:14.174775 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0c225593396b1807d1b7b756b935950665f404715f8c4099d0983f200cab9f44', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-13 04:47:14.174788 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ffde37d445a690acc1aae8a3db250c1246cefb99d234f5b1653ac7a0c3cfea8c', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-13 04:47:14.174796 | orchestrator | skipping: [testbed-node-4] => (item={'id': '487245e255513b872c695b8c62e9cce7a3b3c4114ce02407d46c52ad3a12da3c', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-13 04:47:14.174803 | orchestrator | skipping: [testbed-node-5] => (item={'id': '901efc6dbbea741912099fcc0394640cf65fa9b4ad0b2f70f1ffa1ae5e47c8cd', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-13 04:47:14.174811 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a2b2c75d5a49f8af8ada5db6d5d88e2e69773c9dd70876e2e3f7a08d340bda6f', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-13 04:47:14.174822 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4a4b698b8d8b88005b8a69b09a724854d7e2d46a5c9789bd156901300cfd1713', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-13 04:47:14.174830 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fa2dd65c65aa33af11a5ca775a65e9f4baf3d07603e894f0f16a2f88aa524d21', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-13 04:47:14.174837 | orchestrator | skipping: [testbed-node-5] => (item={'id': '40ad9ab50b9afcda5849a9bab79d8fced3f7ea5c2e520e1c6eaacfe1189c56b2', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 40 minutes (healthy)'})  2026-02-13 04:47:14.174845 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9f2ed66f56069713714c8f5aacb5c52828ae9f5df1d702aefcbe4e1ba26e3d71', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-13 04:47:14.174852 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fa6d46d9112a3c3e6e707c23f0fbbfb151f057b5e9188534744cbf4e30ad5e83', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-13 04:47:14.174860 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eb900085f6fad9e01e406e6843f5add3b2ea6564aa33545d87d6b66a0d0bb0fa', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 48 minutes (healthy)'})  2026-02-13 04:47:14.174868 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd0182b2057ebf3b24fd9811f46c184999f3844015a4a11ff213b5b7157d60681', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-13 04:47:14.174875 | orchestrator | skipping: [testbed-node-5] => (item={'id': '44d375062edf8456c9b0cb57cbcb4cf7b705b45703742b2a6c03defa9c0273c5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-13 04:47:14.174883 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0f6812b67db969fab1cefcf17bc5c02c9955ac6e1beb7063a4337a43cafd59ce', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-13 04:47:14.174895 | orchestrator | ok: [testbed-node-5] => (item={'id': 'fb24ed1c3a5a363d8721e770b12eca583cb2bcb1f37d3648d921525585b61d8e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-13 04:47:14.174908 | orchestrator | ok: [testbed-node-5] => (item={'id': '5bcc06c4aad53fc889530e8bb27e2af74564151bdac5b56538c1d36d1602323e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-13 04:47:25.586618 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'7d3378b1761611239eabac7a1c78db4462299ae916e37278b2d7624c10f9c8c5', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-13 04:47:25.586726 | orchestrator | skipping: [testbed-node-5] => (item={'id': '5689637a51c20baff66df3694503aaf39a5a05c3ed58c60740e23df044f0e7d5', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-13 04:47:25.586739 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3e6b0a886497a55bce2ed2cf4b34483f21af275ae4c7194724ef740a22abcf51', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-13 04:47:25.586748 | orchestrator | skipping: [testbed-node-5] => (item={'id': '01192cabaa9abcefa7f4167a4344ebe7dd29284d77ac8d865a936bffa5e0e7b2', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-13 04:47:25.586771 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'eff22d4ed49b5810f819728f2ba2acd5bc6a1c3feba6dba190c5da40c3328891', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-13 04:47:25.586779 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0afe75f7beed2b0359753178b5ee2e85b5c32bd61b8d7848831ce266c5e678f0', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-13 04:47:25.586786 | orchestrator | 2026-02-13 04:47:25.586794 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-02-13 04:47:25.586801 | orchestrator | Friday 13 February 2026 
04:47:14 +0000 (0:00:00.534) 0:00:05.227 ******* 2026-02-13 04:47:25.586807 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.586815 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:25.586821 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:25.586827 | orchestrator | 2026-02-13 04:47:25.586833 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-02-13 04:47:25.586874 | orchestrator | Friday 13 February 2026 04:47:14 +0000 (0:00:00.315) 0:00:05.543 ******* 2026-02-13 04:47:25.586883 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:25.586890 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:47:25.586897 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:47:25.586904 | orchestrator | 2026-02-13 04:47:25.586910 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-02-13 04:47:25.586917 | orchestrator | Friday 13 February 2026 04:47:15 +0000 (0:00:00.469) 0:00:06.012 ******* 2026-02-13 04:47:25.586924 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.586930 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:25.586936 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:25.586942 | orchestrator | 2026-02-13 04:47:25.586949 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-13 04:47:25.586955 | orchestrator | Friday 13 February 2026 04:47:15 +0000 (0:00:00.317) 0:00:06.330 ******* 2026-02-13 04:47:25.586961 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.586968 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:25.586992 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:25.586999 | orchestrator | 2026-02-13 04:47:25.587005 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-02-13 04:47:25.587011 | orchestrator | Friday 13 February 2026 04:47:15 +0000 (0:00:00.291) 0:00:06.621 ******* 
2026-02-13 04:47:25.587018 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-02-13 04:47:25.587025 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-02-13 04:47:25.587031 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:25.587037 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-02-13 04:47:25.587043 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-02-13 04:47:25.587050 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:47:25.587056 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-02-13 04:47:25.587062 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-02-13 04:47:25.587068 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:47:25.587074 | orchestrator | 2026-02-13 04:47:25.587081 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-02-13 04:47:25.587087 | orchestrator | Friday 13 February 2026 04:47:15 +0000 (0:00:00.317) 0:00:06.939 ******* 2026-02-13 04:47:25.587093 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.587099 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:25.587106 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:25.587112 | orchestrator | 2026-02-13 04:47:25.587118 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-13 04:47:25.587124 | orchestrator | Friday 13 February 2026 04:47:16 +0000 (0:00:00.493) 0:00:07.433 ******* 2026-02-13 04:47:25.587131 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:25.587149 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:47:25.587156 | 
orchestrator | skipping: [testbed-node-5] 2026-02-13 04:47:25.587162 | orchestrator | 2026-02-13 04:47:25.587168 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-13 04:47:25.587174 | orchestrator | Friday 13 February 2026 04:47:16 +0000 (0:00:00.302) 0:00:07.735 ******* 2026-02-13 04:47:25.587182 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:25.587189 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:47:25.587196 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:47:25.587203 | orchestrator | 2026-02-13 04:47:25.587210 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-02-13 04:47:25.587217 | orchestrator | Friday 13 February 2026 04:47:17 +0000 (0:00:00.313) 0:00:08.048 ******* 2026-02-13 04:47:25.587224 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.587231 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:25.587238 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:25.587245 | orchestrator | 2026-02-13 04:47:25.587252 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-13 04:47:25.587259 | orchestrator | Friday 13 February 2026 04:47:17 +0000 (0:00:00.315) 0:00:08.364 ******* 2026-02-13 04:47:25.587266 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:25.587274 | orchestrator | 2026-02-13 04:47:25.587302 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-13 04:47:25.587309 | orchestrator | Friday 13 February 2026 04:47:18 +0000 (0:00:00.674) 0:00:09.038 ******* 2026-02-13 04:47:25.587315 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:25.587322 | orchestrator | 2026-02-13 04:47:25.587328 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-13 04:47:25.587335 | orchestrator | Friday 13 February 2026 04:47:18 +0000 
(0:00:00.271) 0:00:09.310 ******* 2026-02-13 04:47:25.587341 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:25.587347 | orchestrator | 2026-02-13 04:47:25.587353 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-13 04:47:25.587365 | orchestrator | Friday 13 February 2026 04:47:18 +0000 (0:00:00.253) 0:00:09.563 ******* 2026-02-13 04:47:25.587372 | orchestrator | 2026-02-13 04:47:25.587378 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-13 04:47:25.587384 | orchestrator | Friday 13 February 2026 04:47:18 +0000 (0:00:00.076) 0:00:09.640 ******* 2026-02-13 04:47:25.587391 | orchestrator | 2026-02-13 04:47:25.587397 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-13 04:47:25.587403 | orchestrator | Friday 13 February 2026 04:47:18 +0000 (0:00:00.079) 0:00:09.719 ******* 2026-02-13 04:47:25.587409 | orchestrator | 2026-02-13 04:47:25.587416 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-13 04:47:25.587422 | orchestrator | Friday 13 February 2026 04:47:18 +0000 (0:00:00.075) 0:00:09.794 ******* 2026-02-13 04:47:25.587428 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:25.587434 | orchestrator | 2026-02-13 04:47:25.587441 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-02-13 04:47:25.587447 | orchestrator | Friday 13 February 2026 04:47:19 +0000 (0:00:00.255) 0:00:10.049 ******* 2026-02-13 04:47:25.587453 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:25.587459 | orchestrator | 2026-02-13 04:47:25.587466 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-13 04:47:25.587472 | orchestrator | Friday 13 February 2026 04:47:19 +0000 (0:00:00.265) 0:00:10.315 ******* 2026-02-13 04:47:25.587478 | 
orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.587484 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:25.587490 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:25.587496 | orchestrator | 2026-02-13 04:47:25.587502 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-02-13 04:47:25.587509 | orchestrator | Friday 13 February 2026 04:47:19 +0000 (0:00:00.303) 0:00:10.619 ******* 2026-02-13 04:47:25.587515 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.587521 | orchestrator | 2026-02-13 04:47:25.587527 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-02-13 04:47:25.587533 | orchestrator | Friday 13 February 2026 04:47:20 +0000 (0:00:00.709) 0:00:11.328 ******* 2026-02-13 04:47:25.587540 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-13 04:47:25.587546 | orchestrator | 2026-02-13 04:47:25.587552 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-02-13 04:47:25.587558 | orchestrator | Friday 13 February 2026 04:47:21 +0000 (0:00:01.628) 0:00:12.957 ******* 2026-02-13 04:47:25.587564 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.587571 | orchestrator | 2026-02-13 04:47:25.587577 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-02-13 04:47:25.587583 | orchestrator | Friday 13 February 2026 04:47:22 +0000 (0:00:00.138) 0:00:13.095 ******* 2026-02-13 04:47:25.587589 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.587595 | orchestrator | 2026-02-13 04:47:25.587601 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-02-13 04:47:25.587608 | orchestrator | Friday 13 February 2026 04:47:22 +0000 (0:00:00.313) 0:00:13.409 ******* 2026-02-13 04:47:25.587614 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:25.587620 | 
orchestrator | 2026-02-13 04:47:25.587626 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-02-13 04:47:25.587632 | orchestrator | Friday 13 February 2026 04:47:22 +0000 (0:00:00.124) 0:00:13.533 ******* 2026-02-13 04:47:25.587639 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.587645 | orchestrator | 2026-02-13 04:47:25.587651 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-13 04:47:25.587657 | orchestrator | Friday 13 February 2026 04:47:22 +0000 (0:00:00.125) 0:00:13.659 ******* 2026-02-13 04:47:25.587663 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:25.587670 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:25.587676 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:25.587687 | orchestrator | 2026-02-13 04:47:25.587693 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-02-13 04:47:25.587699 | orchestrator | Friday 13 February 2026 04:47:22 +0000 (0:00:00.289) 0:00:13.948 ******* 2026-02-13 04:47:25.587706 | orchestrator | changed: [testbed-node-3] 2026-02-13 04:47:25.587712 | orchestrator | changed: [testbed-node-4] 2026-02-13 04:47:25.587718 | orchestrator | changed: [testbed-node-5] 2026-02-13 04:47:35.887601 | orchestrator | 2026-02-13 04:47:35.887747 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-02-13 04:47:35.887764 | orchestrator | Friday 13 February 2026 04:47:25 +0000 (0:00:02.593) 0:00:16.541 ******* 2026-02-13 04:47:35.887775 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:35.887786 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:35.887795 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:35.887805 | orchestrator | 2026-02-13 04:47:35.887815 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-02-13 04:47:35.887825 | orchestrator | Friday 
13 February 2026 04:47:25 +0000 (0:00:00.311) 0:00:16.852 ******* 2026-02-13 04:47:35.887834 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:35.887844 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:35.887852 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:35.887860 | orchestrator | 2026-02-13 04:47:35.887868 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-02-13 04:47:35.887877 | orchestrator | Friday 13 February 2026 04:47:26 +0000 (0:00:00.510) 0:00:17.363 ******* 2026-02-13 04:47:35.887890 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:35.887903 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:47:35.887916 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:47:35.887929 | orchestrator | 2026-02-13 04:47:35.887940 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-02-13 04:47:35.887952 | orchestrator | Friday 13 February 2026 04:47:26 +0000 (0:00:00.334) 0:00:17.698 ******* 2026-02-13 04:47:35.887965 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:35.887978 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:35.887991 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:35.888003 | orchestrator | 2026-02-13 04:47:35.888016 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-02-13 04:47:35.888035 | orchestrator | Friday 13 February 2026 04:47:27 +0000 (0:00:00.527) 0:00:18.226 ******* 2026-02-13 04:47:35.888049 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:35.888062 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:47:35.888075 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:47:35.888087 | orchestrator | 2026-02-13 04:47:35.888099 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-02-13 04:47:35.888112 | orchestrator | Friday 13 February 2026 04:47:27 +0000 
(0:00:00.312) 0:00:18.538 ******* 2026-02-13 04:47:35.888126 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:35.888139 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:47:35.888152 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:47:35.888166 | orchestrator | 2026-02-13 04:47:35.888180 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-13 04:47:35.888194 | orchestrator | Friday 13 February 2026 04:47:27 +0000 (0:00:00.303) 0:00:18.841 ******* 2026-02-13 04:47:35.888208 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:35.888222 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:35.888236 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:35.888250 | orchestrator | 2026-02-13 04:47:35.888266 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-02-13 04:47:35.888279 | orchestrator | Friday 13 February 2026 04:47:28 +0000 (0:00:00.556) 0:00:19.398 ******* 2026-02-13 04:47:35.888321 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:35.888335 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:35.888349 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:35.888363 | orchestrator | 2026-02-13 04:47:35.888378 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-02-13 04:47:35.888415 | orchestrator | Friday 13 February 2026 04:47:29 +0000 (0:00:00.767) 0:00:20.165 ******* 2026-02-13 04:47:35.888429 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:35.888443 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:35.888456 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:35.888467 | orchestrator | 2026-02-13 04:47:35.888480 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-02-13 04:47:35.888493 | orchestrator | Friday 13 February 2026 04:47:29 +0000 (0:00:00.347) 0:00:20.512 ******* 2026-02-13 
04:47:35.888507 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:35.888521 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:47:35.888533 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:47:35.888546 | orchestrator | 2026-02-13 04:47:35.888559 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-02-13 04:47:35.888572 | orchestrator | Friday 13 February 2026 04:47:29 +0000 (0:00:00.315) 0:00:20.829 ******* 2026-02-13 04:47:35.888586 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:47:35.888599 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:47:35.888613 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:47:35.888627 | orchestrator | 2026-02-13 04:47:35.888641 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-13 04:47:35.888654 | orchestrator | Friday 13 February 2026 04:47:30 +0000 (0:00:00.538) 0:00:21.367 ******* 2026-02-13 04:47:35.888667 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-13 04:47:35.888681 | orchestrator | 2026-02-13 04:47:35.888695 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-13 04:47:35.888707 | orchestrator | Friday 13 February 2026 04:47:30 +0000 (0:00:00.264) 0:00:21.631 ******* 2026-02-13 04:47:35.888720 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:47:35.888733 | orchestrator | 2026-02-13 04:47:35.888745 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-13 04:47:35.888759 | orchestrator | Friday 13 February 2026 04:47:30 +0000 (0:00:00.275) 0:00:21.907 ******* 2026-02-13 04:47:35.888772 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-13 04:47:35.888785 | orchestrator | 2026-02-13 04:47:35.888798 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-13 
04:47:35.888812 | orchestrator | Friday 13 February 2026 04:47:32 +0000 (0:00:01.673) 0:00:23.581 ******* 2026-02-13 04:47:35.888824 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-13 04:47:35.888836 | orchestrator | 2026-02-13 04:47:35.888850 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-13 04:47:35.888864 | orchestrator | Friday 13 February 2026 04:47:32 +0000 (0:00:00.277) 0:00:23.858 ******* 2026-02-13 04:47:35.888877 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-13 04:47:35.888890 | orchestrator | 2026-02-13 04:47:35.888926 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-13 04:47:35.888940 | orchestrator | Friday 13 February 2026 04:47:33 +0000 (0:00:00.261) 0:00:24.119 ******* 2026-02-13 04:47:35.888954 | orchestrator | 2026-02-13 04:47:35.888968 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-13 04:47:35.888980 | orchestrator | Friday 13 February 2026 04:47:33 +0000 (0:00:00.085) 0:00:24.205 ******* 2026-02-13 04:47:35.888991 | orchestrator | 2026-02-13 04:47:35.888999 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-13 04:47:35.889007 | orchestrator | Friday 13 February 2026 04:47:33 +0000 (0:00:00.067) 0:00:24.273 ******* 2026-02-13 04:47:35.889014 | orchestrator | 2026-02-13 04:47:35.889022 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-13 04:47:35.889030 | orchestrator | Friday 13 February 2026 04:47:33 +0000 (0:00:00.073) 0:00:24.346 ******* 2026-02-13 04:47:35.889037 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-13 04:47:35.889045 | orchestrator | 2026-02-13 04:47:35.889053 | orchestrator | TASK [Print report file information] 
******************************************* 2026-02-13 04:47:35.889069 | orchestrator | Friday 13 February 2026 04:47:34 +0000 (0:00:01.537) 0:00:25.884 ******* 2026-02-13 04:47:35.889077 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-02-13 04:47:35.889084 | orchestrator |  "msg": [ 2026-02-13 04:47:35.889093 | orchestrator |  "Validator run completed.", 2026-02-13 04:47:35.889101 | orchestrator |  "You can find the report file here:", 2026-02-13 04:47:35.889109 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-13T04:47:10+00:00-report.json", 2026-02-13 04:47:35.889124 | orchestrator |  "on the following host:", 2026-02-13 04:47:35.889132 | orchestrator |  "testbed-manager" 2026-02-13 04:47:35.889140 | orchestrator |  ] 2026-02-13 04:47:35.889149 | orchestrator | } 2026-02-13 04:47:35.889157 | orchestrator | 2026-02-13 04:47:35.889165 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:47:35.889174 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-13 04:47:35.889184 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-13 04:47:35.889192 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-13 04:47:35.889200 | orchestrator | 2026-02-13 04:47:35.889207 | orchestrator | 2026-02-13 04:47:35.889215 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:47:35.889223 | orchestrator | Friday 13 February 2026 04:47:35 +0000 (0:00:00.631) 0:00:26.515 ******* 2026-02-13 04:47:35.889231 | orchestrator | =============================================================================== 2026-02-13 04:47:35.889238 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.59s 2026-02-13 
04:47:35.889246 | orchestrator | Aggregate test results step one ----------------------------------------- 1.67s 2026-02-13 04:47:35.889254 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.63s 2026-02-13 04:47:35.889262 | orchestrator | Write report file ------------------------------------------------------- 1.54s 2026-02-13 04:47:35.889269 | orchestrator | Get timestamp for report file ------------------------------------------- 0.85s 2026-02-13 04:47:35.889277 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.77s 2026-02-13 04:47:35.889312 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.77s 2026-02-13 04:47:35.889321 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.71s 2026-02-13 04:47:35.889329 | orchestrator | Create report output directory ------------------------------------------ 0.70s 2026-02-13 04:47:35.889337 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s 2026-02-13 04:47:35.889344 | orchestrator | Print report file information ------------------------------------------- 0.63s 2026-02-13 04:47:35.889352 | orchestrator | Prepare test data ------------------------------------------------------- 0.56s 2026-02-13 04:47:35.889360 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.54s 2026-02-13 04:47:35.889368 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.53s 2026-02-13 04:47:35.889376 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.53s 2026-02-13 04:47:35.889383 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.53s 2026-02-13 04:47:35.889391 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.51s 2026-02-13 04:47:35.889399 
| orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.49s 2026-02-13 04:47:35.889407 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.47s 2026-02-13 04:47:35.889414 | orchestrator | Calculate sub test expression results ----------------------------------- 0.35s 2026-02-13 04:47:36.246005 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-02-13 04:47:36.253674 | orchestrator | + set -e 2026-02-13 04:47:36.254002 | orchestrator | + source /opt/manager-vars.sh 2026-02-13 04:47:36.254096 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-13 04:47:36.254121 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-13 04:47:36.254142 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-13 04:47:36.254162 | orchestrator | ++ CEPH_VERSION=reef 2026-02-13 04:47:36.254181 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-13 04:47:36.254203 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-13 04:47:36.254222 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-13 04:47:36.254241 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-13 04:47:36.254260 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-13 04:47:36.254280 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-13 04:47:36.254323 | orchestrator | ++ export ARA=false 2026-02-13 04:47:36.254342 | orchestrator | ++ ARA=false 2026-02-13 04:47:36.254361 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-13 04:47:36.254374 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-13 04:47:36.254385 | orchestrator | ++ export TEMPEST=false 2026-02-13 04:47:36.254395 | orchestrator | ++ TEMPEST=false 2026-02-13 04:47:36.254406 | orchestrator | ++ export IS_ZUUL=true 2026-02-13 04:47:36.254417 | orchestrator | ++ IS_ZUUL=true 2026-02-13 04:47:36.254427 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 04:47:36.254439 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 04:47:36.254450 | orchestrator | ++ export EXTERNAL_API=false 2026-02-13 04:47:36.254460 | orchestrator | ++ EXTERNAL_API=false 2026-02-13 04:47:36.254471 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-13 04:47:36.254482 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-13 04:47:36.254492 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-13 04:47:36.254503 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-13 04:47:36.254514 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-13 04:47:36.254526 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-13 04:47:36.254539 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-13 04:47:36.254550 | orchestrator | + source /etc/os-release 2026-02-13 04:47:36.254577 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-02-13 04:47:36.254589 | orchestrator | ++ NAME=Ubuntu 2026-02-13 04:47:36.254602 | orchestrator | ++ VERSION_ID=24.04 2026-02-13 04:47:36.254614 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-02-13 04:47:36.254626 | orchestrator | ++ VERSION_CODENAME=noble 2026-02-13 04:47:36.254638 | orchestrator | ++ ID=ubuntu 2026-02-13 04:47:36.254650 | orchestrator | ++ ID_LIKE=debian 2026-02-13 04:47:36.254662 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-02-13 04:47:36.254674 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-02-13 04:47:36.254687 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-02-13 04:47:36.254700 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-02-13 04:47:36.254712 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-02-13 04:47:36.254724 | orchestrator | ++ LOGO=ubuntu-logo 2026-02-13 04:47:36.254736 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-02-13 04:47:36.254749 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-02-13 
04:47:36.254763 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-13 04:47:36.278893 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-13 04:47:58.457097 | orchestrator | 2026-02-13 04:47:58.457202 | orchestrator | # Status of Elasticsearch 2026-02-13 04:47:58.457218 | orchestrator | 2026-02-13 04:47:58.457230 | orchestrator | + pushd /opt/configuration/contrib 2026-02-13 04:47:58.457243 | orchestrator | + echo 2026-02-13 04:47:58.457255 | orchestrator | + echo '# Status of Elasticsearch' 2026-02-13 04:47:58.457266 | orchestrator | + echo 2026-02-13 04:47:58.457277 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-02-13 04:47:58.646934 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-02-13 04:47:58.647024 | orchestrator | 2026-02-13 04:47:58.647033 | orchestrator | # Status of MariaDB 2026-02-13 04:47:58.647039 | orchestrator | 2026-02-13 04:47:58.647045 | orchestrator | + echo 2026-02-13 04:47:58.647071 | orchestrator | + echo '# Status of MariaDB' 2026-02-13 04:47:58.647076 | orchestrator | + echo 2026-02-13 04:47:58.648384 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-13 04:47:58.716421 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-13 04:47:58.716521 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-13 04:47:58.716533 | orchestrator | + MARIADB_USER=root_shard_0 2026-02-13 04:47:58.716541 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-02-13 04:47:58.790694 
| orchestrator | Reading package lists... 2026-02-13 04:47:59.135486 | orchestrator | Building dependency tree... 2026-02-13 04:47:59.135775 | orchestrator | Reading state information... 2026-02-13 04:47:59.497662 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-02-13 04:47:59.497775 | orchestrator | bc set to manually installed. 2026-02-13 04:47:59.497793 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-02-13 04:48:00.159578 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-02-13 04:48:00.160616 | orchestrator | 2026-02-13 04:48:00.160685 | orchestrator | # Status of Prometheus 2026-02-13 04:48:00.160708 | orchestrator | 2026-02-13 04:48:00.160721 | orchestrator | + echo 2026-02-13 04:48:00.160732 | orchestrator | + echo '# Status of Prometheus' 2026-02-13 04:48:00.160743 | orchestrator | + echo 2026-02-13 04:48:00.160755 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-02-13 04:48:00.242660 | orchestrator | Unauthorized 2026-02-13 04:48:00.247009 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-02-13 04:48:00.299781 | orchestrator | Unauthorized 2026-02-13 04:48:00.302120 | orchestrator | 2026-02-13 04:48:00.302216 | orchestrator | # Status of RabbitMQ 2026-02-13 04:48:00.302224 | orchestrator | 2026-02-13 04:48:00.302229 | orchestrator | + echo 2026-02-13 04:48:00.302234 | orchestrator | + echo '# Status of RabbitMQ' 2026-02-13 04:48:00.302239 | orchestrator | + echo 2026-02-13 04:48:00.302248 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-13 04:48:00.346576 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-13 04:48:00.346643 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-13 04:48:00.346651 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-02-13 04:48:00.833974 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-02-13 04:48:00.843194 | orchestrator | 2026-02-13 04:48:00.843270 | orchestrator | # Status of Redis 2026-02-13 04:48:00.843281 | orchestrator | 2026-02-13 04:48:00.843357 | orchestrator | + echo 2026-02-13 04:48:00.843369 | orchestrator | + echo '# Status of Redis' 2026-02-13 04:48:00.843379 | orchestrator | + echo 2026-02-13 04:48:00.843390 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-02-13 04:48:00.850613 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.002261s;;;0.000000;10.000000 2026-02-13 04:48:00.851016 | orchestrator | 2026-02-13 04:48:00.851045 | orchestrator | # Create backup of MariaDB database 2026-02-13 04:48:00.851058 | orchestrator | 2026-02-13 04:48:00.851069 | orchestrator | + popd 2026-02-13 04:48:00.851081 | orchestrator | + echo 2026-02-13 04:48:00.851092 | orchestrator | + echo '# Create backup of MariaDB database' 2026-02-13 04:48:00.851103 | orchestrator | + echo 2026-02-13 04:48:00.851115 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-02-13 04:48:02.865514 | orchestrator | 2026-02-13 04:48:02 | INFO  | Task 8cde2fc5-7d85-4876-a1ab-e89d3fe99278 (mariadb_backup) was prepared for execution. 2026-02-13 04:48:02.865622 | orchestrator | 2026-02-13 04:48:02 | INFO  | It takes a moment until task 8cde2fc5-7d85-4876-a1ab-e89d3fe99278 (mariadb_backup) has been started and output is visible here. 
2026-02-13 04:48:31.736694 | orchestrator | 2026-02-13 04:48:31.736808 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 04:48:31.736825 | orchestrator | 2026-02-13 04:48:31.736837 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 04:48:31.736856 | orchestrator | Friday 13 February 2026 04:48:06 +0000 (0:00:00.173) 0:00:00.173 ******* 2026-02-13 04:48:31.736876 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:48:31.736896 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:48:31.736916 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:48:31.736937 | orchestrator | 2026-02-13 04:48:31.736988 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 04:48:31.737003 | orchestrator | Friday 13 February 2026 04:48:07 +0000 (0:00:00.321) 0:00:00.495 ******* 2026-02-13 04:48:31.737014 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-13 04:48:31.737026 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-13 04:48:31.737036 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-13 04:48:31.737046 | orchestrator | 2026-02-13 04:48:31.737057 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-13 04:48:31.737068 | orchestrator | 2026-02-13 04:48:31.737079 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-13 04:48:31.737089 | orchestrator | Friday 13 February 2026 04:48:07 +0000 (0:00:00.560) 0:00:01.055 ******* 2026-02-13 04:48:31.737100 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 04:48:31.737111 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-13 04:48:31.737122 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-13 04:48:31.737132 | orchestrator | 
2026-02-13 04:48:31.737143 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-13 04:48:31.737154 | orchestrator | Friday 13 February 2026 04:48:08 +0000 (0:00:00.426) 0:00:01.482 ******* 2026-02-13 04:48:31.737165 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 04:48:31.737178 | orchestrator | 2026-02-13 04:48:31.737189 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-02-13 04:48:31.737214 | orchestrator | Friday 13 February 2026 04:48:08 +0000 (0:00:00.573) 0:00:02.055 ******* 2026-02-13 04:48:31.737225 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:48:31.737238 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:48:31.737251 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:48:31.737263 | orchestrator | 2026-02-13 04:48:31.737276 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-02-13 04:48:31.737288 | orchestrator | Friday 13 February 2026 04:48:12 +0000 (0:00:03.203) 0:00:05.258 ******* 2026-02-13 04:48:31.737300 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-13 04:48:31.737347 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-13 04:48:31.737360 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-13 04:48:31.737372 | orchestrator | mariadb_bootstrap_restart 2026-02-13 04:48:31.737385 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:48:31.737395 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:48:31.737406 | orchestrator | changed: [testbed-node-0] 2026-02-13 04:48:31.737417 | orchestrator | 2026-02-13 04:48:31.737427 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-13 04:48:31.737438 | orchestrator | 
skipping: no hosts matched 2026-02-13 04:48:31.737449 | orchestrator | 2026-02-13 04:48:31.737459 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-13 04:48:31.737470 | orchestrator | skipping: no hosts matched 2026-02-13 04:48:31.737481 | orchestrator | 2026-02-13 04:48:31.737491 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-13 04:48:31.737502 | orchestrator | skipping: no hosts matched 2026-02-13 04:48:31.737513 | orchestrator | 2026-02-13 04:48:31.737523 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-13 04:48:31.737534 | orchestrator | 2026-02-13 04:48:31.737545 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-13 04:48:31.737556 | orchestrator | Friday 13 February 2026 04:48:30 +0000 (0:00:18.549) 0:00:23.808 ******* 2026-02-13 04:48:31.737566 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:48:31.737577 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:48:31.737588 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:48:31.737598 | orchestrator | 2026-02-13 04:48:31.737609 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-13 04:48:31.737628 | orchestrator | Friday 13 February 2026 04:48:30 +0000 (0:00:00.317) 0:00:24.126 ******* 2026-02-13 04:48:31.737639 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:48:31.737650 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:48:31.737660 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:48:31.737671 | orchestrator | 2026-02-13 04:48:31.737682 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:48:31.737694 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 
04:48:31.737706 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 04:48:31.737718 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 04:48:31.737729 | orchestrator | 2026-02-13 04:48:31.737739 | orchestrator | 2026-02-13 04:48:31.737750 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:48:31.737761 | orchestrator | Friday 13 February 2026 04:48:31 +0000 (0:00:00.424) 0:00:24.551 ******* 2026-02-13 04:48:31.737772 | orchestrator | =============================================================================== 2026-02-13 04:48:31.737783 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.55s 2026-02-13 04:48:31.737812 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.20s 2026-02-13 04:48:31.737823 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.57s 2026-02-13 04:48:31.737835 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.56s 2026-02-13 04:48:31.737845 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2026-02-13 04:48:31.737856 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.42s 2026-02-13 04:48:31.737867 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2026-02-13 04:48:31.737877 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2026-02-13 04:48:32.091614 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-02-13 04:48:32.101598 | orchestrator | + set -e 2026-02-13 04:48:32.101689 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 04:48:32.101704 | orchestrator | ++ export 
INTERACTIVE=false 2026-02-13 04:48:32.101725 | orchestrator | ++ INTERACTIVE=false 2026-02-13 04:48:32.101745 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-13 04:48:32.101764 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-13 04:48:32.101784 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-13 04:48:32.102962 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-13 04:48:32.113761 | orchestrator | 2026-02-13 04:48:32.113837 | orchestrator | # OpenStack endpoints 2026-02-13 04:48:32.113852 | orchestrator | 2026-02-13 04:48:32.113864 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-13 04:48:32.113876 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-13 04:48:32.113888 | orchestrator | + export OS_CLOUD=admin 2026-02-13 04:48:32.113899 | orchestrator | + OS_CLOUD=admin 2026-02-13 04:48:32.113910 | orchestrator | + echo 2026-02-13 04:48:32.113921 | orchestrator | + echo '# OpenStack endpoints' 2026-02-13 04:48:32.113932 | orchestrator | + echo 2026-02-13 04:48:32.113943 | orchestrator | + openstack endpoint list 2026-02-13 04:48:35.280408 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-13 04:48:35.280520 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-02-13 04:48:35.280536 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-13 04:48:35.280576 | orchestrator | | 14c1328c44c94a11ae10855a25a67c2f | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-13 04:48:35.280616 | orchestrator | | 173e246a4d1e4bd68b213f28ac700629 | RegionOne | 
octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-02-13 04:48:35.280639 | orchestrator | | 197cdf4f35fc43b98b86668397fd9419 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-02-13 04:48:35.280659 | orchestrator | | 1fa96dbe430248418035c310b96afd8d | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-02-13 04:48:35.280679 | orchestrator | | 220f0a955037489da5035123665633b4 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-02-13 04:48:35.280698 | orchestrator | | 22255e746b544240ad6d172707d2fc64 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-02-13 04:48:35.280716 | orchestrator | | 38bdd1838a7b471f8fff98f2912a8c45 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-02-13 04:48:35.280735 | orchestrator | | 5259ee1b2d8f4998948ca71069e43fe0 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-02-13 04:48:35.280754 | orchestrator | | 56e6a0f6223d4bf3b302ca7e10ba507a | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-13 04:48:35.280772 | orchestrator | | 57fbb209be164badaba37d9ed8603612 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-02-13 04:48:35.280791 | orchestrator | | 5ecb22b258b2439a8486250c234d7cb7 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-02-13 04:48:35.280810 | orchestrator | | 6964b4ec2d584b8a85a0c57373313225 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-02-13 04:48:35.280828 | orchestrator | | 821b429f584742d99553134f8d37685f | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-02-13 
04:48:35.280847 | orchestrator | | 8fa148227ab9433f85656985ac080423 | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-02-13 04:48:35.280866 | orchestrator | | 9211ecc598f34ed49854c7e11dd42793 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-02-13 04:48:35.280884 | orchestrator | | 9485a1290af84433ae11bb52064c8827 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-02-13 04:48:35.280897 | orchestrator | | 999c72513a1549bd91f546d0adeded18 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-02-13 04:48:35.280909 | orchestrator | | a400dbe059ca495589bdbd16c7d8ee16 | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-02-13 04:48:35.280921 | orchestrator | | a9edf5f257ec451694be5252a2f91508 | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-13 04:48:35.280934 | orchestrator | | bcc9cd3042124235af505a7aadd1fbfd | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-02-13 04:48:35.280979 | orchestrator | | bf78a78649ac43d88b5940a423c22bd5 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-13 04:48:35.281013 | orchestrator | | c3122241c74b462c92cce2455ec7d4bf | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-02-13 04:48:35.281042 | orchestrator | | c6b81b5633bb40869637985b093ef2ea | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-02-13 04:48:35.281064 | orchestrator | | c723b0578a394aa894cfc2e2da32408a | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-02-13 04:48:35.281083 | orchestrator | | d0123e37afdd4e699d8134d4097778a4 | RegionOne | magnum 
| container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-02-13 04:48:35.281101 | orchestrator | | d610ef6ecb524a3290bf3c7cad49bd0f | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-13 04:48:35.281121 | orchestrator | | df5cc68720dd46559ef6770117980f2a | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-02-13 04:48:35.281139 | orchestrator | | e3947ca8634d42d0adb03ed76d0a6769 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-13 04:48:35.281157 | orchestrator | | e79d3aa1b99544cab40bafa6dc12fd88 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-02-13 04:48:35.281176 | orchestrator | | e7fe729d72c24bdc9564b0b29e608fe7 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-02-13 04:48:35.281195 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-13 04:48:35.540833 | orchestrator | 2026-02-13 04:48:35.540929 | orchestrator | # Cinder 2026-02-13 04:48:35.540940 | orchestrator | 2026-02-13 04:48:35.540948 | orchestrator | + echo 2026-02-13 04:48:35.540968 | orchestrator | + echo '# Cinder' 2026-02-13 04:48:35.540981 | orchestrator | + echo 2026-02-13 04:48:35.540993 | orchestrator | + openstack volume service list 2026-02-13 04:48:38.180138 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-13 04:48:38.180228 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-02-13 04:48:38.180240 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-13 04:48:38.180249 | 
orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-13T04:48:37.000000 |
2026-02-13 04:48:38.180258 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-13T04:48:37.000000 |
2026-02-13 04:48:38.180267 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-13T04:48:37.000000 |
2026-02-13 04:48:38.180276 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-13T04:48:37.000000 |
2026-02-13 04:48:38.180285 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-13T04:48:34.000000 |
2026-02-13 04:48:38.180293 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-13T04:48:28.000000 |
2026-02-13 04:48:38.180302 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-13T04:48:31.000000 |
2026-02-13 04:48:38.180423 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-13T04:48:33.000000 |
2026-02-13 04:48:38.180432 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-13T04:48:34.000000 |
2026-02-13 04:48:38.180466 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-13 04:48:38.446136 | orchestrator |
2026-02-13 04:48:38.446229 | orchestrator | # Neutron
2026-02-13 04:48:38.446243 | orchestrator |
2026-02-13 04:48:38.446254 | orchestrator | + echo
2026-02-13 04:48:38.446264 | orchestrator | + echo '# Neutron'
2026-02-13 04:48:38.446275 | orchestrator | + echo
2026-02-13 04:48:38.446285 | orchestrator | + openstack network agent list
2026-02-13 04:48:41.034565 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-13 04:48:41.034712 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-02-13 04:48:41.034736 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-13 04:48:41.034754 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-02-13 04:48:41.034773 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-02-13 04:48:41.034791 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-02-13 04:48:41.034809 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-02-13 04:48:41.034852 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-02-13 04:48:41.034873 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-02-13 04:48:41.034890 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-13 04:48:41.034908 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-13 04:48:41.034925 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-13 04:48:41.034943 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-13 04:48:41.306389 | orchestrator | + openstack network service provider list
2026-02-13 04:48:43.843023 | orchestrator | +---------------+------+---------+
2026-02-13 04:48:43.843173 | orchestrator | | Service Type | Name | Default |
2026-02-13 04:48:43.843198 | orchestrator | +---------------+------+---------+
2026-02-13 04:48:43.843219 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-02-13 04:48:43.843236 | orchestrator | +---------------+------+---------+
2026-02-13 04:48:44.110783 | orchestrator |
2026-02-13 04:48:44.110882 | orchestrator | # Nova
2026-02-13 04:48:44.110897 | orchestrator |
2026-02-13 04:48:44.110908 | orchestrator | + echo
2026-02-13 04:48:44.110920 | orchestrator | + echo '# Nova'
2026-02-13 04:48:44.110932 | orchestrator | + echo
2026-02-13 04:48:44.110944 | orchestrator | + openstack compute service list
2026-02-13 04:48:47.268714 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-13 04:48:47.268820 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-02-13 04:48:47.268832 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-13 04:48:47.268841 | orchestrator | | 74f157dc-e82c-4156-8445-9109694a1172 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-13T04:48:45.000000 |
2026-02-13 04:48:47.268877 | orchestrator | | 8571e566-01c4-416b-b4e8-fd6de1d7b12e | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-13T04:48:40.000000 |
2026-02-13 04:48:47.268887 | orchestrator | | 0acb445a-7eca-4b0b-8bbc-2cb6ccf16a8e | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-13T04:48:44.000000 |
2026-02-13 04:48:47.268897 | orchestrator | | 9ce28ca5-7cb5-44d0-aea0-68b558e8f667 | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-13T04:48:43.000000 |
2026-02-13 04:48:47.268906 | orchestrator | | fefd3ec7-b5a8-422f-8b41-443b9574c011 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-13T04:48:46.000000 |
2026-02-13 04:48:47.268916 | orchestrator | | 46ebc61c-5a0a-4ad0-a2bf-490f477650e5 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-13T04:48:46.000000 |
2026-02-13 04:48:47.268925 | orchestrator | | 176b1794-90d7-4a7e-b269-a00d76991932 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-13T04:48:37.000000 |
2026-02-13 04:48:47.268934 | orchestrator | | 5b753ae2-3b41-41f3-a598-5c785689f41a | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-13T04:48:37.000000 |
2026-02-13 04:48:47.268944 | orchestrator | | a338c257-5b48-4fd1-b738-44bd4d4c2386 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-13T04:48:38.000000 |
2026-02-13 04:48:47.268953 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-13 04:48:47.540785 | orchestrator | + openstack hypervisor list
2026-02-13 04:48:50.772821 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-13 04:48:50.772911 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-02-13 04:48:50.772921 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-13 04:48:50.772928 | orchestrator | | 121f261e-e1a3-466d-a7fd-c61d93cb6d41 | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-02-13 04:48:50.772934 | orchestrator | | 28b7c8ec-3176-4369-b05d-ae09f3ea9118 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-02-13 04:48:50.772941 | orchestrator | | b02324b1-0f56-4f9f-aa81-fc6797c2624d | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-02-13 04:48:50.772947 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-13 04:48:51.037285 | orchestrator |
2026-02-13 04:48:51.037433 | orchestrator | # Run OpenStack test play
2026-02-13 04:48:51.037448 | orchestrator |
2026-02-13 04:48:51.037461 | orchestrator | + echo
2026-02-13 04:48:51.037472 | orchestrator | + echo '# Run OpenStack test play'
2026-02-13 04:48:51.037483 | orchestrator | + echo
2026-02-13 04:48:51.037494 | orchestrator | + osism apply --environment openstack test
2026-02-13 04:48:52.949764 | orchestrator | 2026-02-13 04:48:52 | INFO  | Trying to run play test in environment openstack
2026-02-13 04:49:03.025851 | orchestrator | 2026-02-13 04:49:03 | INFO  | Task 9ba71b6b-dccb-49e5-aac6-6514d5d771c8 (test) was prepared for execution.
2026-02-13 04:49:03.025985 | orchestrator | 2026-02-13 04:49:03 | INFO  | It takes a moment until task 9ba71b6b-dccb-49e5-aac6-6514d5d771c8 (test) has been started and output is visible here.
2026-02-13 04:51:48.407974 | orchestrator |
2026-02-13 04:51:48.408091 | orchestrator | PLAY [Create test project] *****************************************************
2026-02-13 04:51:48.408107 | orchestrator |
2026-02-13 04:51:48.408119 | orchestrator | TASK [Create test domain] ******************************************************
2026-02-13 04:51:48.408131 | orchestrator | Friday 13 February 2026 04:49:07 +0000 (0:00:00.069) 0:00:00.069 *******
2026-02-13 04:51:48.408143 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.408155 | orchestrator |
2026-02-13 04:51:48.408166 | orchestrator | TASK [Create test-admin user] **************************************************
2026-02-13 04:51:48.408177 | orchestrator | Friday 13 February 2026 04:49:10 +0000 (0:00:03.637) 0:00:03.707 *******
2026-02-13 04:51:48.408188 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.408199 | orchestrator |
2026-02-13 04:51:48.408234 | orchestrator | TASK [Add manager role to user test-admin] *************************************
2026-02-13 04:51:48.408246 | orchestrator | Friday 13 February 2026 04:49:15 +0000 (0:00:04.084) 0:00:07.791 *******
2026-02-13 04:51:48.408304 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.408317 | orchestrator |
2026-02-13 04:51:48.408328 | orchestrator | TASK [Create test project] *****************************************************
2026-02-13 04:51:48.408339 | orchestrator | Friday 13 February 2026 04:49:21 +0000 (0:00:06.583) 0:00:14.375 *******
2026-02-13 04:51:48.408350 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.408361 | orchestrator |
2026-02-13 04:51:48.408407 | orchestrator | TASK [Create test user] ********************************************************
2026-02-13 04:51:48.408425 | orchestrator | Friday 13 February 2026 04:49:25 +0000 (0:00:04.024) 0:00:18.399 *******
2026-02-13 04:51:48.408444 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.408465 | orchestrator |
2026-02-13 04:51:48.408484 | orchestrator | TASK [Add member roles to user test] *******************************************
2026-02-13 04:51:48.408503 | orchestrator | Friday 13 February 2026 04:49:29 +0000 (0:00:04.139) 0:00:22.539 *******
2026-02-13 04:51:48.408520 | orchestrator | changed: [localhost] => (item=load-balancer_member)
2026-02-13 04:51:48.408533 | orchestrator | changed: [localhost] => (item=member)
2026-02-13 04:51:48.408547 | orchestrator | changed: [localhost] => (item=creator)
2026-02-13 04:51:48.408559 | orchestrator |
2026-02-13 04:51:48.408571 | orchestrator | TASK [Create test server group] ************************************************
2026-02-13 04:51:48.408584 | orchestrator | Friday 13 February 2026 04:49:41 +0000 (0:00:11.413) 0:00:33.953 *******
2026-02-13 04:51:48.408596 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.408629 | orchestrator |
2026-02-13 04:51:48.408661 | orchestrator | TASK [Create ssh security group] ***********************************************
2026-02-13 04:51:48.408679 | orchestrator | Friday 13 February 2026 04:49:45 +0000 (0:00:04.175) 0:00:38.128 *******
2026-02-13 04:51:48.408698 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.408716 | orchestrator |
2026-02-13 04:51:48.408734 | orchestrator | TASK [Add rule to ssh security group] ******************************************
2026-02-13 04:51:48.408751 | orchestrator | Friday 13 February 2026 04:49:50 +0000 (0:00:04.813) 0:00:42.942 *******
2026-02-13 04:51:48.408769 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.408788 | orchestrator |
2026-02-13 04:51:48.408805 | orchestrator | TASK [Create icmp security group] **********************************************
2026-02-13 04:51:48.408824 | orchestrator | Friday 13 February 2026 04:49:54 +0000 (0:00:04.174) 0:00:47.116 *******
2026-02-13 04:51:48.408842 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.408859 | orchestrator |
2026-02-13 04:51:48.408876 | orchestrator | TASK [Add rule to icmp security group] *****************************************
2026-02-13 04:51:48.408894 | orchestrator | Friday 13 February 2026 04:49:58 +0000 (0:00:04.085) 0:00:51.202 *******
2026-02-13 04:51:48.408912 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.408927 | orchestrator |
2026-02-13 04:51:48.408944 | orchestrator | TASK [Create test keypair] *****************************************************
2026-02-13 04:51:48.408963 | orchestrator | Friday 13 February 2026 04:50:02 +0000 (0:00:04.118) 0:00:55.321 *******
2026-02-13 04:51:48.408982 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.409000 | orchestrator |
2026-02-13 04:51:48.409019 | orchestrator | TASK [Create test network] *****************************************************
2026-02-13 04:51:48.409037 | orchestrator | Friday 13 February 2026 04:50:06 +0000 (0:00:03.792) 0:00:59.113 *******
2026-02-13 04:51:48.409055 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.409073 | orchestrator |
2026-02-13 04:51:48.409094 | orchestrator | TASK [Create test subnet] ******************************************************
2026-02-13 04:51:48.409113 | orchestrator | Friday 13 February 2026 04:50:11 +0000 (0:00:04.971) 0:01:04.084 *******
2026-02-13 04:51:48.409132 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.409144 | orchestrator |
2026-02-13 04:51:48.409155 | orchestrator | TASK [Create test router] ******************************************************
2026-02-13 04:51:48.409167 | orchestrator | Friday 13 February 2026 04:50:16 +0000 (0:00:05.499) 0:01:09.584 *******
2026-02-13 04:51:48.409192 | orchestrator | changed: [localhost]
2026-02-13 04:51:48.409203 | orchestrator |
2026-02-13 04:51:48.409215 | orchestrator | PLAY [Manage test instances and volumes] ***************************************
2026-02-13 04:51:48.409226 | orchestrator |
2026-02-13 04:51:48.409237 | orchestrator | TASK [Get test server group] ***************************************************
2026-02-13 04:51:48.409247 | orchestrator | Friday 13 February 2026 04:50:28 +0000 (0:00:11.303) 0:01:20.887 *******
2026-02-13 04:51:48.409259 | orchestrator | ok: [localhost]
2026-02-13 04:51:48.409270 | orchestrator |
2026-02-13 04:51:48.409282 | orchestrator | TASK [Detach test volume] ******************************************************
2026-02-13 04:51:48.409293 | orchestrator | Friday 13 February 2026 04:50:31 +0000 (0:00:03.738) 0:01:24.626 *******
2026-02-13 04:51:48.409304 | orchestrator | skipping: [localhost]
2026-02-13 04:51:48.409315 | orchestrator |
2026-02-13 04:51:48.409326 | orchestrator | TASK [Delete test volume] ******************************************************
2026-02-13 04:51:48.409336 | orchestrator | Friday 13 February 2026 04:50:31 +0000 (0:00:00.053) 0:01:24.679 *******
2026-02-13 04:51:48.409347 | orchestrator | skipping: [localhost]
2026-02-13 04:51:48.409359 | orchestrator |
2026-02-13 04:51:48.409408 | orchestrator | TASK [Delete test instances] ***************************************************
2026-02-13 04:51:48.409428 | orchestrator | Friday 13 February 2026 04:50:31 +0000 (0:00:00.051) 0:01:24.731 *******
2026-02-13 04:51:48.409465 | orchestrator | skipping: [localhost] => (item=test-4)
2026-02-13 04:51:48.409483 | orchestrator | skipping: [localhost] => (item=test-3)
2026-02-13 04:51:48.409518 | orchestrator | skipping: [localhost] => (item=test-2)
2026-02-13 04:51:48.409530 | orchestrator | skipping: [localhost] => (item=test-1)
2026-02-13 04:51:48.409541 | orchestrator | skipping: [localhost] => (item=test)
2026-02-13 04:51:48.409553 | orchestrator | skipping: [localhost]
2026-02-13 04:51:48.409564 | orchestrator |
2026-02-13 04:51:48.409575 | orchestrator | TASK [Wait for instance deletion to complete] **********************************
2026-02-13 04:51:48.409586 | orchestrator | Friday 13 February 2026 04:50:32 +0000 (0:00:00.156) 0:01:24.887 *******
2026-02-13 04:51:48.409597 | orchestrator | skipping: [localhost]
2026-02-13 04:51:48.409607 | orchestrator |
2026-02-13 04:51:48.409618 | orchestrator | TASK [Create test instances] ***************************************************
2026-02-13 04:51:48.409629 | orchestrator | Friday 13 February 2026 04:50:32 +0000 (0:00:00.147) 0:01:25.034 *******
2026-02-13 04:51:48.409640 | orchestrator | changed: [localhost] => (item=test)
2026-02-13 04:51:48.409651 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-13 04:51:48.409662 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-13 04:51:48.409673 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-13 04:51:48.409684 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-13 04:51:48.409695 | orchestrator |
2026-02-13 04:51:48.409706 | orchestrator | TASK [Wait for instance creation to complete] **********************************
2026-02-13 04:51:48.409717 | orchestrator | Friday 13 February 2026 04:50:37 +0000 (0:00:04.917) 0:01:29.951 *******
2026-02-13 04:51:48.409728 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-02-13 04:51:48.409740 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left).
2026-02-13 04:51:48.409751 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left).
2026-02-13 04:51:48.409762 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left).
2026-02-13 04:51:48.409775 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j730839421825.3658', 'results_file': '/ansible/.ansible_async/j730839421825.3658', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-13 04:51:48.409789 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left).
2026-02-13 04:51:48.409800 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j776479718659.3683', 'results_file': '/ansible/.ansible_async/j776479718659.3683', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-13 04:51:48.409822 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j210124754775.3708', 'results_file': '/ansible/.ansible_async/j210124754775.3708', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-13 04:51:48.409834 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j674519370389.3733', 'results_file': '/ansible/.ansible_async/j674519370389.3733', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-13 04:51:48.409845 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j626252007983.3758', 'results_file': '/ansible/.ansible_async/j626252007983.3758', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-13 04:51:48.409856 | orchestrator |
2026-02-13 04:51:48.409867 | orchestrator | TASK [Add metadata to instances] ***********************************************
2026-02-13 04:51:48.409879 | orchestrator | Friday 13 February 2026 04:51:34 +0000 (0:00:57.298) 0:02:27.250 *******
2026-02-13 04:51:48.409890 | orchestrator | changed: [localhost] => (item=test)
2026-02-13 04:51:48.409901 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-13 04:51:48.409912 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-13 04:51:48.409923 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-13 04:51:48.409934 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-13 04:51:48.409945 | orchestrator |
2026-02-13 04:51:48.409956 | orchestrator | TASK [Wait for metadata to be added] *******************************************
2026-02-13 04:51:48.409967 | orchestrator | Friday 13 February 2026 04:51:38 +0000 (0:00:04.499) 0:02:31.749 *******
2026-02-13 04:51:48.409978 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left).
2026-02-13 04:51:48.409990 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j361442099156.3869', 'results_file': '/ansible/.ansible_async/j361442099156.3869', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-13 04:51:48.410002 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j838639324113.3894', 'results_file': '/ansible/.ansible_async/j838639324113.3894', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-13 04:51:48.410014 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j401380702545.3919', 'results_file': '/ansible/.ansible_async/j401380702545.3919', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-13 04:51:48.410114 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j44606953474.3944', 'results_file': '/ansible/.ansible_async/j44606953474.3944', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-13 04:52:28.743225 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j462422559024.3969', 'results_file': '/ansible/.ansible_async/j462422559024.3969', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-13 04:52:28.743347 | orchestrator |
2026-02-13 04:52:28.743367 | orchestrator | TASK [Add tag to instances] ****************************************************
2026-02-13 04:52:28.743430 | orchestrator | Friday 13 February 2026 04:51:48 +0000 (0:00:09.421) 0:02:41.171 *******
2026-02-13 04:52:28.743443 | orchestrator | changed: [localhost] => (item=test)
2026-02-13 04:52:28.743456 | orchestrator | changed: [localhost] => (item=test-1)
2026-02-13 04:52:28.743468 | orchestrator | changed: [localhost] => (item=test-2)
2026-02-13 04:52:28.743480 | orchestrator | changed: [localhost] => (item=test-3)
2026-02-13 04:52:28.743491 | orchestrator | changed: [localhost] => (item=test-4)
2026-02-13 04:52:28.743504 | orchestrator |
2026-02-13 04:52:28.743542 | orchestrator | TASK [Wait for tags to be added] ***********************************************
2026-02-13 04:52:28.743555 | orchestrator | Friday 13 February 2026 04:51:53 +0000 (0:00:04.930) 0:02:46.102 *******
2026-02-13 04:52:28.743567 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left).
2026-02-13 04:52:28.743581 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j10479894964.4038', 'results_file': '/ansible/.ansible_async/j10479894964.4038', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'})
2026-02-13 04:52:28.743595 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j744269158976.4063', 'results_file': '/ansible/.ansible_async/j744269158976.4063', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'})
2026-02-13 04:52:28.743606 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j466138301592.4096', 'results_file': '/ansible/.ansible_async/j466138301592.4096', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'})
2026-02-13 04:52:28.743619 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j521684272155.4122', 'results_file': '/ansible/.ansible_async/j521684272155.4122', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'})
2026-02-13 04:52:28.743630 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j697746106497.4148', 'results_file': '/ansible/.ansible_async/j697746106497.4148', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'})
2026-02-13 04:52:28.743642 | orchestrator |
2026-02-13 04:52:28.743655 | orchestrator | TASK [Create test volume] ******************************************************
2026-02-13 04:52:28.743666 | orchestrator | Friday 13 February 2026 04:52:03 +0000 (0:00:09.991) 0:02:56.093 *******
2026-02-13 04:52:28.743678 | orchestrator | changed: [localhost]
2026-02-13 04:52:28.743691 | orchestrator |
2026-02-13 04:52:28.743703 | orchestrator | TASK [Attach test volume] ******************************************************
2026-02-13 04:52:28.743716 | orchestrator | Friday 13 February 2026 04:52:09 +0000 (0:00:06.518) 0:03:02.612 *******
2026-02-13 04:52:28.743729 | orchestrator | changed: [localhost]
2026-02-13 04:52:28.743742 | orchestrator |
2026-02-13 04:52:28.743754 | orchestrator | TASK [Create floating ip address] **********************************************
2026-02-13 04:52:28.743767 | orchestrator | Friday 13 February 2026 04:52:23 +0000 (0:00:13.347) 0:03:15.959 *******
2026-02-13 04:52:28.743775 | orchestrator | ok: [localhost]
2026-02-13 04:52:28.743784 | orchestrator |
2026-02-13 04:52:28.743792 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-02-13 04:52:28.743800 | orchestrator | Friday 13 February 2026 04:52:28 +0000 (0:00:05.212) 0:03:21.171 *******
2026-02-13 04:52:28.743808 | orchestrator | ok: [localhost] => {
2026-02-13 04:52:28.743816 | orchestrator |  "msg": "192.168.112.193"
2026-02-13 04:52:28.743825 | orchestrator | }
2026-02-13 04:52:28.743833 | orchestrator |
2026-02-13 04:52:28.743841 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 04:52:28.743851 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-13 04:52:28.743860 | orchestrator |
2026-02-13 04:52:28.743868 | orchestrator |
2026-02-13 04:52:28.743876 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 04:52:28.743885 | orchestrator | Friday 13 February 2026 04:52:28 +0000 (0:00:00.044) 0:03:21.216 *******
2026-02-13 04:52:28.743893 | orchestrator | ===============================================================================
2026-02-13 04:52:28.743901 | orchestrator | Wait for instance creation to complete --------------------------------- 57.30s
2026-02-13 04:52:28.743909 | orchestrator | Attach test volume ----------------------------------------------------- 13.35s
2026-02-13 04:52:28.743918 | orchestrator | Add member roles to user test ------------------------------------------ 11.41s
2026-02-13 04:52:28.743945 | orchestrator | Create test router ----------------------------------------------------- 11.30s
2026-02-13 04:52:28.743953 | orchestrator | Wait for tags to be added ----------------------------------------------- 9.99s
2026-02-13 04:52:28.743960 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.42s
2026-02-13 04:52:28.743967 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.58s
2026-02-13 04:52:28.743992 | orchestrator | Create test volume ------------------------------------------------------ 6.52s
2026-02-13 04:52:28.744000 | orchestrator | Create test subnet ------------------------------------------------------ 5.50s
2026-02-13 04:52:28.744007 | orchestrator | Create floating ip address ---------------------------------------------- 5.21s
2026-02-13 04:52:28.744014 | orchestrator | Create test network ----------------------------------------------------- 4.97s
2026-02-13 04:52:28.744021 | orchestrator | Add tag to instances ---------------------------------------------------- 4.93s
2026-02-13 04:52:28.744028 | orchestrator | Create test instances --------------------------------------------------- 4.92s
2026-02-13 04:52:28.744035 | orchestrator | Create ssh security group ----------------------------------------------- 4.81s
2026-02-13 04:52:28.744042 | orchestrator | Add metadata to instances ----------------------------------------------- 4.50s
2026-02-13 04:52:28.744049 | orchestrator | Create test server group ------------------------------------------------ 4.18s
2026-02-13 04:52:28.744056 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.17s
2026-02-13 04:52:28.744063 | orchestrator | Create test user -------------------------------------------------------- 4.14s
2026-02-13 04:52:28.744070 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.12s
2026-02-13 04:52:28.744078 | orchestrator | Create icmp security group ---------------------------------------------- 4.09s
2026-02-13 04:52:29.055507 | orchestrator | + server_list
2026-02-13 04:52:29.055603 | orchestrator | + openstack --os-cloud test server list
2026-02-13 04:52:32.914537 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-02-13 04:52:32.914612 | orchestrator | | ID | Name | Status | Networks | Image | Flavor |
2026-02-13 04:52:32.914618 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-02-13 04:52:32.914622 | orchestrator | | 754966ce-5a10-4b32-bc15-7af068b34d61 | test-4 | ACTIVE | test=192.168.112.118, 192.168.200.91 | N/A (booted from volume) | SCS-1L-1 |
2026-02-13 04:52:32.914626 | orchestrator | | 4ac62101-9d6d-44c9-8da8-78c2704e7e9b | test-2 | ACTIVE | test=192.168.112.185, 192.168.200.7 | N/A (booted from volume) | SCS-1L-1 |
2026-02-13 04:52:32.914630 | orchestrator | | b247a693-bc2a-44d6-9294-c5bc3272edb3 | test-3 | ACTIVE | test=192.168.112.130, 192.168.200.230 | N/A (booted from volume) | SCS-1L-1 |
2026-02-13 04:52:32.914634 | orchestrator | | af3edd63-7e7e-402e-a725-0120c7cbc736 | test-1 | ACTIVE | test=192.168.112.102, 192.168.200.141 | N/A (booted from volume) | SCS-1L-1 |
2026-02-13 04:52:32.914638 | orchestrator | | b72d69a1-7c7a-4d22-963d-33ce655ad601 | test | ACTIVE | test=192.168.112.193, 192.168.200.43 | N/A (booted from volume) | SCS-1L-1 |
2026-02-13 04:52:32.914642 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+
2026-02-13 04:52:33.178102 | orchestrator | + openstack --os-cloud test server show test
2026-02-13 04:52:36.372724 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-13 04:52:36.372864 | orchestrator | | Field | Value |
2026-02-13 04:52:36.372918 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-13 04:52:36.372945 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-13 04:52:36.372965 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-13 04:52:36.372983 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-13 04:52:36.373001 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-02-13 04:52:36.373020 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-13 04:52:36.373038 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-13 04:52:36.373078 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-13 04:52:36.373098 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-13 04:52:36.373163 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-13 04:52:36.373183 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-13 04:52:36.373217 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-13 04:52:36.373237 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-13 04:52:36.373255 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-13 04:52:36.373274 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-13 04:52:36.373293 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-13 04:52:36.373311 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-13T04:51:08.000000 |
2026-02-13 04:52:36.373339 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-13 04:52:36.373404 | orchestrator | | accessIPv4 | |
2026-02-13 04:52:36.373427 | orchestrator | | accessIPv6 | |
2026-02-13 04:52:36.373445 | orchestrator | | addresses | test=192.168.112.193, 192.168.200.43 |
2026-02-13 04:52:36.373470 | orchestrator | | config_drive | |
2026-02-13 04:52:36.373488 | orchestrator | | created | 2026-02-13T04:50:41Z |
2026-02-13 04:52:36.373506 | orchestrator | | description | None |
2026-02-13 04:52:36.373526 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-13 04:52:36.373545 | orchestrator | | hostId | 499b1e7837b3ea7023daad013e6b48dd2486a1ef6d3dab94d87dc785 |
2026-02-13 04:52:36.373563 | orchestrator | | host_status | None |
2026-02-13 04:52:36.373603 | orchestrator | | id | b72d69a1-7c7a-4d22-963d-33ce655ad601 |
2026-02-13 04:52:36.373624 | orchestrator | | image | N/A (booted from volume) |
2026-02-13 04:52:36.373641 | orchestrator | | key_name | test |
2026-02-13 04:52:36.373659 | orchestrator | | locked | False |
2026-02-13 04:52:36.373677 | orchestrator | | locked_reason | None |
2026-02-13 04:52:36.373693 | orchestrator | | name | test |
2026-02-13 04:52:36.373711 | orchestrator | | pinned_availability_zone | None |
2026-02-13 04:52:36.373729 | orchestrator | | progress | 0 |
2026-02-13 04:52:36.373747 | orchestrator | | project_id | a61aed73e07548fa9afd58e7f65b79e9 |
2026-02-13 04:52:36.373764 | orchestrator | | properties | hostname='test' |
2026-02-13 04:52:36.373809 | orchestrator | | security_groups | name='ssh' |
2026-02-13 04:52:36.373827 | orchestrator | | | name='icmp' |
2026-02-13 04:52:36.373845 | orchestrator | | server_groups | None |
2026-02-13 04:52:36.373862 | orchestrator | | status | ACTIVE |
2026-02-13 04:52:36.373885 | orchestrator | | tags | test |
2026-02-13 04:52:36.373902 | orchestrator | | trusted_image_certificates | None |
2026-02-13 04:52:36.373920 | orchestrator | | updated | 2026-02-13T04:51:40Z |
2026-02-13 04:52:36.373937 | orchestrator | | user_id | 52075b7a1e8340a6a7af2c2ed40e4efa |
2026-02-13 04:52:36.373954 | orchestrator | | volumes_attached | delete_on_termination='True', id='1e88a2ac-1da2-4a4e-bad0-85a3398159ca' |
2026-02-13 04:52:36.373980 | orchestrator | | | delete_on_termination='False', id='785b9c19-55a3-4b21-bfc7-ca24661c7ef2' |
2026-02-13 04:52:36.379886 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-13 04:52:36.652571 | orchestrator | + openstack --os-cloud test server show test-1
2026-02-13 04:52:39.742641 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-13 04:52:39.742748 | orchestrator | | Field | Value |
2026-02-13 04:52:39.742775 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
2026-02-13 04:52:39.742781 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-13 04:52:39.742786 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-13 04:52:39.742792 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-13 04:52:39.742797 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-02-13 04:52:39.742822 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-13 04:52:39.742828 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-13 04:52:39.742846 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-13 04:52:39.742852 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-13 04:52:39.742857 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-13 04:52:39.742865 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-13 04:52:39.742874 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-13 04:52:39.742882 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-13 04:52:39.742890 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-13 04:52:39.742906 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-13 04:52:39.742914 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-13 04:52:39.742922 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-13T04:51:10.000000 |
2026-02-13 04:52:39.742935 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-13 04:52:39.742943 | orchestrator | | accessIPv4 | |
2026-02-13
04:52:39.742951 | orchestrator | | accessIPv6 | | 2026-02-13 04:52:39.742964 | orchestrator | | addresses | test=192.168.112.102, 192.168.200.141 | 2026-02-13 04:52:39.742974 | orchestrator | | config_drive | | 2026-02-13 04:52:39.742982 | orchestrator | | created | 2026-02-13T04:50:42Z | 2026-02-13 04:52:39.742996 | orchestrator | | description | None | 2026-02-13 04:52:39.743005 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-13 04:52:39.743024 | orchestrator | | hostId | 499b1e7837b3ea7023daad013e6b48dd2486a1ef6d3dab94d87dc785 | 2026-02-13 04:52:39.743029 | orchestrator | | host_status | None | 2026-02-13 04:52:39.743040 | orchestrator | | id | af3edd63-7e7e-402e-a725-0120c7cbc736 | 2026-02-13 04:52:39.743045 | orchestrator | | image | N/A (booted from volume) | 2026-02-13 04:52:39.743050 | orchestrator | | key_name | test | 2026-02-13 04:52:39.743066 | orchestrator | | locked | False | 2026-02-13 04:52:39.743071 | orchestrator | | locked_reason | None | 2026-02-13 04:52:39.743076 | orchestrator | | name | test-1 | 2026-02-13 04:52:39.743088 | orchestrator | | pinned_availability_zone | None | 2026-02-13 04:52:39.743097 | orchestrator | | progress | 0 | 2026-02-13 04:52:39.743105 | orchestrator | | project_id | a61aed73e07548fa9afd58e7f65b79e9 | 2026-02-13 04:52:39.743113 | orchestrator | | properties | hostname='test-1' | 2026-02-13 04:52:39.743127 | orchestrator | | security_groups | name='ssh' | 2026-02-13 04:52:39.743137 | orchestrator | | | name='icmp' | 2026-02-13 04:52:39.743145 | orchestrator | | server_groups | None | 2026-02-13 04:52:39.743153 | orchestrator | | status | ACTIVE | 2026-02-13 
04:52:39.743159 | orchestrator | | tags | test | 2026-02-13 04:52:39.743172 | orchestrator | | trusted_image_certificates | None | 2026-02-13 04:52:39.743180 | orchestrator | | updated | 2026-02-13T04:51:40Z | 2026-02-13 04:52:39.743189 | orchestrator | | user_id | 52075b7a1e8340a6a7af2c2ed40e4efa | 2026-02-13 04:52:39.743198 | orchestrator | | volumes_attached | delete_on_termination='True', id='ab75e3c4-e643-426e-88d6-caa4e3b2ea72' | 2026-02-13 04:52:39.747033 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-13 04:52:40.015184 | orchestrator | + openstack --os-cloud test server show test-2 2026-02-13 04:52:42.882499 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-13 04:52:42.882608 | orchestrator | | Field | Value | 2026-02-13 04:52:42.882643 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-13 04:52:42.882661 | 
orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-13 04:52:42.882698 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-13 04:52:42.882711 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-13 04:52:42.882722 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2026-02-13 04:52:42.882733 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-13 04:52:42.882744 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-13 04:52:42.882774 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-13 04:52:42.882786 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-13 04:52:42.882797 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-13 04:52:42.882808 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-13 04:52:42.882836 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-13 04:52:42.882847 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-13 04:52:42.882859 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-13 04:52:42.882870 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-13 04:52:42.882881 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-13 04:52:42.882892 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-13T04:51:12.000000 | 2026-02-13 04:52:42.882911 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-13 04:52:42.882923 | orchestrator | | accessIPv4 | | 2026-02-13 04:52:42.882934 | orchestrator | | accessIPv6 | | 2026-02-13 04:52:42.882950 | orchestrator | | addresses | test=192.168.112.185, 192.168.200.7 | 2026-02-13 04:52:42.882968 | orchestrator | | config_drive | | 2026-02-13 04:52:42.882980 | orchestrator | | created | 2026-02-13T04:50:43Z | 2026-02-13 04:52:42.882991 | orchestrator | | description | None | 2026-02-13 04:52:42.883005 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', 
extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-13 04:52:42.883017 | orchestrator | | hostId | 499b1e7837b3ea7023daad013e6b48dd2486a1ef6d3dab94d87dc785 | 2026-02-13 04:52:42.883030 | orchestrator | | host_status | None | 2026-02-13 04:52:42.883050 | orchestrator | | id | 4ac62101-9d6d-44c9-8da8-78c2704e7e9b | 2026-02-13 04:52:42.883064 | orchestrator | | image | N/A (booted from volume) | 2026-02-13 04:52:42.883076 | orchestrator | | key_name | test | 2026-02-13 04:52:42.883101 | orchestrator | | locked | False | 2026-02-13 04:52:42.883114 | orchestrator | | locked_reason | None | 2026-02-13 04:52:42.883125 | orchestrator | | name | test-2 | 2026-02-13 04:52:42.883136 | orchestrator | | pinned_availability_zone | None | 2026-02-13 04:52:42.883147 | orchestrator | | progress | 0 | 2026-02-13 04:52:42.883159 | orchestrator | | project_id | a61aed73e07548fa9afd58e7f65b79e9 | 2026-02-13 04:52:42.883170 | orchestrator | | properties | hostname='test-2' | 2026-02-13 04:52:42.883189 | orchestrator | | security_groups | name='ssh' | 2026-02-13 04:52:42.883201 | orchestrator | | | name='icmp' | 2026-02-13 04:52:42.883219 | orchestrator | | server_groups | None | 2026-02-13 04:52:42.883235 | orchestrator | | status | ACTIVE | 2026-02-13 04:52:42.883246 | orchestrator | | tags | test | 2026-02-13 04:52:42.883258 | orchestrator | | trusted_image_certificates | None | 2026-02-13 04:52:42.883269 | orchestrator | | updated | 2026-02-13T04:51:41Z | 2026-02-13 04:52:42.883280 | orchestrator | | user_id | 52075b7a1e8340a6a7af2c2ed40e4efa | 2026-02-13 04:52:42.883291 | orchestrator | | volumes_attached | delete_on_termination='True', id='8c5a5487-5dec-4ee8-936b-c5640b5f1048' | 2026-02-13 04:52:42.884880 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-13 04:52:43.132537 | orchestrator | + openstack --os-cloud test server show test-3 2026-02-13 04:52:46.123026 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-13 04:52:46.123137 | orchestrator | | Field | Value | 2026-02-13 04:52:46.123151 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-13 04:52:46.123169 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-13 04:52:46.123180 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-13 04:52:46.123190 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-13 04:52:46.123200 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2026-02-13 04:52:46.123210 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-13 04:52:46.123220 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-13 
04:52:46.123246 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-13 04:52:46.123263 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-13 04:52:46.123273 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-13 04:52:46.123283 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-13 04:52:46.123297 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-13 04:52:46.123307 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-13 04:52:46.123317 | orchestrator | | OS-EXT-STS:power_state | Running | 2026-02-13 04:52:46.123327 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-13 04:52:46.123337 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-13 04:52:46.123347 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-13T04:51:08.000000 | 2026-02-13 04:52:46.123364 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-13 04:52:46.123413 | orchestrator | | accessIPv4 | | 2026-02-13 04:52:46.123431 | orchestrator | | accessIPv6 | | 2026-02-13 04:52:46.123448 | orchestrator | | addresses | test=192.168.112.130, 192.168.200.230 | 2026-02-13 04:52:46.123850 | orchestrator | | config_drive | | 2026-02-13 04:52:46.123869 | orchestrator | | created | 2026-02-13T04:50:43Z | 2026-02-13 04:52:46.123879 | orchestrator | | description | None | 2026-02-13 04:52:46.123889 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-13 04:52:46.123899 | orchestrator | | hostId | 499b1e7837b3ea7023daad013e6b48dd2486a1ef6d3dab94d87dc785 | 2026-02-13 04:52:46.123909 | orchestrator | | host_status | None | 2026-02-13 04:52:46.123936 | orchestrator | | id | 
b247a693-bc2a-44d6-9294-c5bc3272edb3 | 2026-02-13 04:52:46.123951 | orchestrator | | image | N/A (booted from volume) | 2026-02-13 04:52:46.123961 | orchestrator | | key_name | test | 2026-02-13 04:52:46.123971 | orchestrator | | locked | False | 2026-02-13 04:52:46.123981 | orchestrator | | locked_reason | None | 2026-02-13 04:52:46.123991 | orchestrator | | name | test-3 | 2026-02-13 04:52:46.124001 | orchestrator | | pinned_availability_zone | None | 2026-02-13 04:52:46.124011 | orchestrator | | progress | 0 | 2026-02-13 04:52:46.124021 | orchestrator | | project_id | a61aed73e07548fa9afd58e7f65b79e9 | 2026-02-13 04:52:46.124037 | orchestrator | | properties | hostname='test-3' | 2026-02-13 04:52:46.124054 | orchestrator | | security_groups | name='ssh' | 2026-02-13 04:52:46.124069 | orchestrator | | | name='icmp' | 2026-02-13 04:52:46.124079 | orchestrator | | server_groups | None | 2026-02-13 04:52:46.124089 | orchestrator | | status | ACTIVE | 2026-02-13 04:52:46.124099 | orchestrator | | tags | test | 2026-02-13 04:52:46.124109 | orchestrator | | trusted_image_certificates | None | 2026-02-13 04:52:46.124119 | orchestrator | | updated | 2026-02-13T04:51:42Z | 2026-02-13 04:52:46.124129 | orchestrator | | user_id | 52075b7a1e8340a6a7af2c2ed40e4efa | 2026-02-13 04:52:46.124148 | orchestrator | | volumes_attached | delete_on_termination='True', id='5e9cc30f-e4b7-496a-a6d9-e0e435a0f8d7' | 2026-02-13 04:52:46.128289 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-13 04:52:46.395201 | orchestrator | + openstack --os-cloud test server show test-4 2026-02-13 04:52:49.266187 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-13 04:52:49.266314 | orchestrator | | Field | Value | 2026-02-13 04:52:49.266327 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-13 04:52:49.266336 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2026-02-13 04:52:49.266343 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2026-02-13 04:52:49.266351 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2026-02-13 04:52:49.266358 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2026-02-13 04:52:49.266421 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2026-02-13 04:52:49.266430 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2026-02-13 04:52:49.266453 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2026-02-13 04:52:49.266461 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2026-02-13 04:52:49.266474 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2026-02-13 04:52:49.266481 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2026-02-13 04:52:49.266489 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2026-02-13 04:52:49.266497 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2026-02-13 04:52:49.266504 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2026-02-13 04:52:49.266512 | orchestrator | | OS-EXT-STS:task_state | None | 2026-02-13 04:52:49.266526 | orchestrator | | OS-EXT-STS:vm_state | active | 2026-02-13 04:52:49.266534 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-13T04:51:10.000000 | 2026-02-13 04:52:49.266548 | orchestrator | | OS-SRV-USG:terminated_at | None | 2026-02-13 04:52:49.266560 | orchestrator | | accessIPv4 | | 2026-02-13 04:52:49.266568 | orchestrator | | accessIPv6 | | 2026-02-13 04:52:49.266575 | orchestrator | | addresses | test=192.168.112.118, 192.168.200.91 | 2026-02-13 04:52:49.266583 | orchestrator | | config_drive | | 2026-02-13 04:52:49.266590 | orchestrator | | created | 2026-02-13T04:50:44Z | 2026-02-13 04:52:49.266598 | orchestrator | | description | None | 2026-02-13 04:52:49.266611 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2026-02-13 04:52:49.266619 | orchestrator | | hostId | f17e664d0621221e6ce8c371b66c2fa727fbf8307604b3e91f0264ed | 2026-02-13 04:52:49.266627 | orchestrator | | host_status | None | 2026-02-13 04:52:49.266640 | orchestrator | | id | 754966ce-5a10-4b32-bc15-7af068b34d61 | 2026-02-13 04:52:49.266651 | orchestrator | | image | N/A (booted from volume) | 2026-02-13 04:52:49.266659 | orchestrator | | key_name | test | 2026-02-13 04:52:49.266667 | orchestrator | | locked | False | 2026-02-13 04:52:49.266674 | orchestrator | | locked_reason | None | 2026-02-13 04:52:49.266682 | orchestrator | | name | test-4 | 2026-02-13 04:52:49.266695 | orchestrator | | pinned_availability_zone | None | 2026-02-13 04:52:49.266702 | orchestrator | | progress | 0 | 2026-02-13 
04:52:49.266710 | orchestrator | | project_id | a61aed73e07548fa9afd58e7f65b79e9 | 2026-02-13 04:52:49.266717 | orchestrator | | properties | hostname='test-4' | 2026-02-13 04:52:49.266731 | orchestrator | | security_groups | name='ssh' | 2026-02-13 04:52:49.266743 | orchestrator | | | name='icmp' | 2026-02-13 04:52:49.266751 | orchestrator | | server_groups | None | 2026-02-13 04:52:49.266758 | orchestrator | | status | ACTIVE | 2026-02-13 04:52:49.266766 | orchestrator | | tags | test | 2026-02-13 04:52:49.266779 | orchestrator | | trusted_image_certificates | None | 2026-02-13 04:52:49.266787 | orchestrator | | updated | 2026-02-13T04:51:43Z | 2026-02-13 04:52:49.266794 | orchestrator | | user_id | 52075b7a1e8340a6a7af2c2ed40e4efa | 2026-02-13 04:52:49.266802 | orchestrator | | volumes_attached | delete_on_termination='True', id='08098c97-3083-4da3-9bbb-8b413a1dc2f3' | 2026-02-13 04:52:49.270291 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2026-02-13 04:52:49.530821 | orchestrator | + server_ping 2026-02-13 04:52:49.532681 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2026-02-13 04:52:49.532743 | orchestrator | ++ tr -d '\r' 2026-02-13 04:52:52.685716 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2026-02-13 04:52:52.685818 | orchestrator | + ping -c3 192.168.112.185 2026-02-13 04:52:52.701591 | orchestrator | PING 192.168.112.185 (192.168.112.185) 56(84) bytes of data. 
2026-02-13 04:52:52.701690 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=1 ttl=63 time=8.59 ms
2026-02-13 04:52:53.697261 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=2 ttl=63 time=2.17 ms
2026-02-13 04:52:54.699238 | orchestrator | 64 bytes from 192.168.112.185: icmp_seq=3 ttl=63 time=2.41 ms
2026-02-13 04:52:54.699333 | orchestrator |
2026-02-13 04:52:54.699348 | orchestrator | --- 192.168.112.185 ping statistics ---
2026-02-13 04:52:54.699360 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-13 04:52:54.699371 | orchestrator | rtt min/avg/max/mdev = 2.174/4.389/8.587/2.969 ms
2026-02-13 04:52:54.699451 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-13 04:52:54.699475 | orchestrator | + ping -c3 192.168.112.193
2026-02-13 04:52:54.715186 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data.
2026-02-13 04:52:54.715313 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=11.1 ms
2026-02-13 04:52:55.707849 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=2.52 ms
2026-02-13 04:52:56.710120 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=2.13 ms
2026-02-13 04:52:56.710218 | orchestrator |
2026-02-13 04:52:56.710234 | orchestrator | --- 192.168.112.193 ping statistics ---
2026-02-13 04:52:56.710247 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-13 04:52:56.710258 | orchestrator | rtt min/avg/max/mdev = 2.127/5.239/11.073/4.128 ms
2026-02-13 04:52:56.710299 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-13 04:52:56.710312 | orchestrator | + ping -c3 192.168.112.130
2026-02-13 04:52:56.723643 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data.
2026-02-13 04:52:56.723728 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=8.53 ms
2026-02-13 04:52:57.719669 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.48 ms
2026-02-13 04:52:58.721987 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=2.19 ms
2026-02-13 04:52:58.722191 | orchestrator |
2026-02-13 04:52:58.722212 | orchestrator | --- 192.168.112.130 ping statistics ---
2026-02-13 04:52:58.722226 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2026-02-13 04:52:58.722322 | orchestrator | rtt min/avg/max/mdev = 2.190/4.401/8.534/2.924 ms
2026-02-13 04:52:58.722342 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-13 04:52:58.722360 | orchestrator | + ping -c3 192.168.112.102
2026-02-13 04:52:58.733691 | orchestrator | PING 192.168.112.102 (192.168.112.102) 56(84) bytes of data.
2026-02-13 04:52:58.733768 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=1 ttl=63 time=6.36 ms
2026-02-13 04:52:59.731220 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=2 ttl=63 time=3.25 ms
2026-02-13 04:53:00.731221 | orchestrator | 64 bytes from 192.168.112.102: icmp_seq=3 ttl=63 time=2.11 ms
2026-02-13 04:53:00.731332 | orchestrator |
2026-02-13 04:53:00.731349 | orchestrator | --- 192.168.112.102 ping statistics ---
2026-02-13 04:53:00.731362 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-02-13 04:53:00.731375 | orchestrator | rtt min/avg/max/mdev = 2.110/3.907/6.359/1.795 ms
2026-02-13 04:53:00.732671 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-13 04:53:00.732706 | orchestrator | + ping -c3 192.168.112.118
2026-02-13 04:53:00.748046 | orchestrator | PING 192.168.112.118 (192.168.112.118) 56(84) bytes of data.
2026-02-13 04:53:00.748150 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=1 ttl=63 time=10.7 ms 2026-02-13 04:53:01.741237 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=2 ttl=63 time=2.71 ms 2026-02-13 04:53:02.742590 | orchestrator | 64 bytes from 192.168.112.118: icmp_seq=3 ttl=63 time=2.27 ms 2026-02-13 04:53:02.742718 | orchestrator | 2026-02-13 04:53:02.742737 | orchestrator | --- 192.168.112.118 ping statistics --- 2026-02-13 04:53:02.742751 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-13 04:53:02.742763 | orchestrator | rtt min/avg/max/mdev = 2.270/5.226/10.699/3.873 ms 2026-02-13 04:53:02.743729 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-13 04:53:03.092051 | orchestrator | ok: Runtime: 0:08:06.743018 2026-02-13 04:53:03.155569 | 2026-02-13 04:53:03.155716 | TASK [Run tempest] 2026-02-13 04:53:03.688547 | orchestrator | skipping: Conditional result was False 2026-02-13 04:53:03.705863 | 2026-02-13 04:53:03.706024 | TASK [Check prometheus alert status] 2026-02-13 04:53:04.244634 | orchestrator | skipping: Conditional result was False 2026-02-13 04:53:04.259159 | 2026-02-13 04:53:04.259322 | PLAY [Upgrade testbed] 2026-02-13 04:53:04.270459 | 2026-02-13 04:53:04.270576 | TASK [Print next ceph version] 2026-02-13 04:53:04.340190 | orchestrator | ok 2026-02-13 04:53:04.350814 | 2026-02-13 04:53:04.350956 | TASK [Print next openstack version] 2026-02-13 04:53:04.420781 | orchestrator | ok 2026-02-13 04:53:04.434348 | 2026-02-13 04:53:04.434496 | TASK [Print next manager version] 2026-02-13 04:53:04.502907 | orchestrator | ok 2026-02-13 04:53:04.513561 | 2026-02-13 04:53:04.513694 | TASK [Set cloud fact (Zuul deployment)] 2026-02-13 04:53:04.583443 | orchestrator | ok 2026-02-13 04:53:04.594351 | 2026-02-13 04:53:04.594484 | TASK [Set cloud fact (local deployment)] 2026-02-13 04:53:04.629658 | orchestrator | skipping: Conditional result was False 2026-02-13 04:53:04.645692 | 2026-02-13 
04:53:04.645846 | TASK [Fetch manager address] 2026-02-13 04:53:04.939417 | orchestrator | ok 2026-02-13 04:53:04.949579 | 2026-02-13 04:53:04.949705 | TASK [Set manager_host address] 2026-02-13 04:53:05.028303 | orchestrator | ok 2026-02-13 04:53:05.041429 | 2026-02-13 04:53:05.041603 | TASK [Run upgrade] 2026-02-13 04:53:05.750075 | orchestrator | + set -e 2026-02-13 04:53:05.750213 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-13 04:53:05.750229 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-13 04:53:05.750242 | orchestrator | + CEPH_VERSION=reef 2026-02-13 04:53:05.750249 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-13 04:53:05.750256 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-13 04:53:05.750269 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-02-13 04:53:05.758855 | orchestrator | + set -e 2026-02-13 04:53:05.758920 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 04:53:05.759504 | orchestrator | ++ export INTERACTIVE=false 2026-02-13 04:53:05.759519 | orchestrator | ++ INTERACTIVE=false 2026-02-13 04:53:05.759524 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-13 04:53:05.759533 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-13 04:53:05.761086 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-02-13 04:53:05.804162 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-02-13 04:53:05.805070 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-13 04:53:05.843956 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-02-13 04:53:05.844035 | orchestrator | + echo 2026-02-13 04:53:05.844047 | orchestrator | + echo '# UPGRADE MANAGER' 2026-02-13 04:53:05.844052 | orchestrator | + echo 2026-02-13 04:53:05.844066 | orchestrator | 2026-02-13 04:53:05.844073 | orchestrator | # 
UPGRADE MANAGER 2026-02-13 04:53:05.844078 | orchestrator | 2026-02-13 04:53:05.844083 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-13 04:53:05.844089 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-13 04:53:05.844094 | orchestrator | + CEPH_VERSION=reef 2026-02-13 04:53:05.844099 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-13 04:53:05.844104 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-13 04:53:05.844109 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-02-13 04:53:05.852782 | orchestrator | + set -e 2026-02-13 04:53:05.852859 | orchestrator | + VERSION=10.0.0-rc.1 2026-02-13 04:53:05.852871 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-02-13 04:53:05.858687 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-02-13 04:53:05.858748 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-13 04:53:05.863296 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-13 04:53:05.868673 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-13 04:53:05.877574 | orchestrator | /opt/configuration ~ 2026-02-13 04:53:05.877622 | orchestrator | + set -e 2026-02-13 04:53:05.877628 | orchestrator | + pushd /opt/configuration 2026-02-13 04:53:05.877633 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-13 04:53:05.877639 | orchestrator | + source /opt/venv/bin/activate 2026-02-13 04:53:05.878616 | orchestrator | ++ deactivate nondestructive 2026-02-13 04:53:05.878629 | orchestrator | ++ '[' -n '' ']' 2026-02-13 04:53:05.878633 | orchestrator | ++ '[' -n '' ']' 2026-02-13 04:53:05.878637 | orchestrator | ++ hash -r 2026-02-13 04:53:05.878641 | orchestrator | ++ '[' -n '' ']' 2026-02-13 04:53:05.878646 | orchestrator | ++ unset VIRTUAL_ENV 
2026-02-13 04:53:05.878650 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-13 04:53:05.878654 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-13 04:53:05.878659 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-13 04:53:05.878663 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-13 04:53:05.878667 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-13 04:53:05.878750 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-13 04:53:05.878757 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-13 04:53:05.878762 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-13 04:53:05.878766 | orchestrator | ++ export PATH
2026-02-13 04:53:05.878770 | orchestrator | ++ '[' -n '' ']'
2026-02-13 04:53:05.878774 | orchestrator | ++ '[' -z '' ']'
2026-02-13 04:53:05.878777 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-13 04:53:05.878781 | orchestrator | ++ PS1='(venv) '
2026-02-13 04:53:05.878785 | orchestrator | ++ export PS1
2026-02-13 04:53:05.878790 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-13 04:53:05.878796 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-13 04:53:05.879003 | orchestrator | ++ hash -r
2026-02-13 04:53:05.879016 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-13 04:53:07.126643 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-13 04:53:07.127797 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-13 04:53:07.129367 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-13 04:53:07.131073 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-13 04:53:07.132353 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-13 04:53:07.144344 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-13 04:53:07.146260 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-13 04:53:07.147367 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-13 04:53:07.149145 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-13 04:53:07.183488 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-13 04:53:07.184942 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-13 04:53:07.186616 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-13 04:53:07.187948 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-13 04:53:07.191893 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-13 04:53:07.410799 | orchestrator | ++ which gilt
2026-02-13 04:53:07.412705 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-13 04:53:07.412741 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-13 04:53:07.650641 | orchestrator | osism.cfg-generics:
2026-02-13 04:53:07.754290 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-13 04:53:07.755276 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-13 04:53:07.757643 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-13 04:53:07.757697 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-13 04:53:08.623467 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-13 04:53:08.637211 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-13 04:53:08.989590 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-13 04:53:09.046675 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-13 04:53:09.046753 | orchestrator | + deactivate
2026-02-13 04:53:09.046761 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-13 04:53:09.046767 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-13 04:53:09.046771 | orchestrator | + export PATH
2026-02-13 04:53:09.046776 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-13 04:53:09.046781 | orchestrator | + '[' -n '' ']'
2026-02-13 04:53:09.046785 | orchestrator | + hash -r
2026-02-13 04:53:09.046788 | orchestrator | + '[' -n '' ']'
2026-02-13 04:53:09.046793 | orchestrator | + unset VIRTUAL_ENV
2026-02-13 04:53:09.046797 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-13 04:53:09.046801 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-13 04:53:09.046805 | orchestrator | + unset -f deactivate
2026-02-13 04:53:09.046879 | orchestrator | ~
2026-02-13 04:53:09.046885 | orchestrator | + popd
2026-02-13 04:53:09.049482 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]]
2026-02-13 04:53:09.049509 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-13 04:53:09.052687 | orchestrator | + set -e
2026-02-13 04:53:09.052775 | orchestrator | + NAMESPACE=kolla/release
2026-02-13 04:53:09.052793 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-13 04:53:09.057164 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-13 04:53:09.060714 | orchestrator | + set -e
2026-02-13 04:53:09.060777 | orchestrator | + pushd /opt/configuration
2026-02-13 04:53:09.060791 | orchestrator | /opt/configuration ~
2026-02-13 04:53:09.060803 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-13 04:53:09.060815 | orchestrator | + source /opt/venv/bin/activate
2026-02-13 04:53:09.060827 | orchestrator | ++ deactivate nondestructive
2026-02-13 04:53:09.060838 | orchestrator | ++ '[' -n '' ']'
2026-02-13 04:53:09.060849 | orchestrator | ++ '[' -n '' ']'
2026-02-13 04:53:09.060861 | orchestrator | ++ hash -r
2026-02-13 04:53:09.060872 | orchestrator | ++ '[' -n '' ']'
2026-02-13 04:53:09.060884 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-13 04:53:09.060895 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-13 04:53:09.060906 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-13 04:53:09.060917 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-13 04:53:09.060928 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-13 04:53:09.060939 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-13 04:53:09.060955 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-13 04:53:09.060967 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-13 04:53:09.060981 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-13 04:53:09.060992 | orchestrator | ++ export PATH
2026-02-13 04:53:09.061003 | orchestrator | ++ '[' -n '' ']'
2026-02-13 04:53:09.061014 | orchestrator | ++ '[' -z '' ']'
2026-02-13 04:53:09.061025 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-13 04:53:09.061035 | orchestrator | ++ PS1='(venv) '
2026-02-13 04:53:09.061046 | orchestrator | ++ export PS1
2026-02-13 04:53:09.061057 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-13 04:53:09.061068 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-13 04:53:09.061079 | orchestrator | ++ hash -r
2026-02-13 04:53:09.061168 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-13 04:53:09.584012 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-13 04:53:09.585030 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-13 04:53:09.586489 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-13 04:53:09.587723 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-13 04:53:09.589023 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-13 04:53:09.599610 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-13 04:53:09.601228 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-13 04:53:09.602139 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-13 04:53:09.603627 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-13 04:53:09.639839 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-13 04:53:09.641714 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-13 04:53:09.643466 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-13 04:53:09.644872 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-13 04:53:09.649052 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-13 04:53:09.859226 | orchestrator | ++ which gilt
2026-02-13 04:53:09.862839 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-13 04:53:09.862936 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-13 04:53:10.051518 | orchestrator | osism.cfg-generics:
2026-02-13 04:53:10.136064 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-13 04:53:10.136185 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-13 04:53:10.136618 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-13 04:53:10.136677 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-13 04:53:10.640701 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-13 04:53:10.653160 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-13 04:53:11.122001 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-13 04:53:11.184864 | orchestrator | ~
2026-02-13 04:53:11.184995 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-13 04:53:11.185024 | orchestrator | + deactivate
2026-02-13 04:53:11.185070 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-13 04:53:11.185096 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-13 04:53:11.185115 | orchestrator | + export PATH
2026-02-13 04:53:11.185134 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-13 04:53:11.185155 | orchestrator | + '[' -n '' ']'
2026-02-13 04:53:11.185174 | orchestrator | + hash -r
2026-02-13 04:53:11.185194 | orchestrator | + '[' -n '' ']'
2026-02-13 04:53:11.185214 | orchestrator | + unset VIRTUAL_ENV
2026-02-13 04:53:11.185233 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-13 04:53:11.185254 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-13 04:53:11.185273 | orchestrator | + unset -f deactivate
2026-02-13 04:53:11.185294 | orchestrator | + popd
2026-02-13 04:53:11.186416 | orchestrator | ++ semver v0.20251130.0 6.0.0
2026-02-13 04:53:11.245040 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-13 04:53:11.245644 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0
2026-02-13 04:53:11.348272 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-13 04:53:11.348472 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml
2026-02-13 04:53:11.355798 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml
2026-02-13 04:53:11.362833 | orchestrator | +++ semver v0.20251130.0 9.5.0
2026-02-13 04:53:11.429893 | orchestrator | ++ '[' -1 -le 0 ']'
2026-02-13 04:53:11.430351 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0
2026-02-13 04:53:11.535584 | orchestrator | ++ '[' 1 -ge 0 ']'
2026-02-13 04:53:11.535666 | orchestrator | ++ echo true
2026-02-13 04:53:11.535914 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true
2026-02-13 04:53:11.538087 | orchestrator | +++ semver 2024.2 2024.2
2026-02-13 04:53:11.619008 | orchestrator | ++ '[' 0 -le 0 ']'
2026-02-13 04:53:11.619109 | orchestrator | +++ semver 2024.2 2025.1
2026-02-13 04:53:11.681752 | orchestrator | ++ '[' -1 -ge 0 ']'
2026-02-13 04:53:11.681847 | orchestrator | ++ echo false
2026-02-13 04:53:11.682825 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false
2026-02-13 04:53:11.682904 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-13 04:53:11.682920 | orchestrator | + echo 'om_rpc_vhost: openstack'
2026-02-13 04:53:11.682930 | orchestrator | + echo 'om_notify_vhost: openstack'
2026-02-13 04:53:11.682942 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml
2026-02-13 04:53:11.689912 | orchestrator | + echo 'export RABBITMQ3TO4=true'
2026-02-13 04:53:11.690006 | orchestrator | + sudo tee -a /opt/manager-vars.sh
2026-02-13 04:53:11.710443 | orchestrator | export RABBITMQ3TO4=true
2026-02-13 04:53:11.713114 | orchestrator | + osism update manager
2026-02-13 04:53:17.447995 | orchestrator | Collecting uv
2026-02-13 04:53:17.554101 | orchestrator | Downloading uv-0.10.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB)
2026-02-13 04:53:17.578352 | orchestrator | Downloading uv-0.10.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.0 MB)
2026-02-13 04:53:18.470487 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.0/23.0 MB 33.1 MB/s eta 0:00:00
2026-02-13 04:53:18.526995 | orchestrator | Installing collected packages: uv
2026-02-13 04:53:18.978339 | orchestrator | Successfully installed uv-0.10.2
2026-02-13 04:53:19.571505 | orchestrator | Resolved 11 packages in 293ms
2026-02-13 04:53:19.587158 | orchestrator | Downloading cryptography (4.3MiB)
2026-02-13 04:53:19.597549 | orchestrator | Downloading netaddr (2.2MiB)
2026-02-13 04:53:19.610469 | orchestrator | Downloading ansible-core (2.1MiB)
2026-02-13 04:53:19.610580 | orchestrator | Downloading ansible (54.5MiB)
2026-02-13 04:53:20.020543 | orchestrator | Downloaded netaddr
2026-02-13 04:53:20.068945 | orchestrator | Downloaded ansible-core
2026-02-13 04:53:20.195302 | orchestrator | Downloaded cryptography
2026-02-13 04:53:26.864698 | orchestrator | Downloaded ansible
2026-02-13 04:53:26.864810 | orchestrator | Prepared 11 packages in 7.29s
2026-02-13 04:53:27.399732 | orchestrator | Installed 11 packages in 533ms
2026-02-13 04:53:27.399816 | orchestrator | + ansible==11.11.0
2026-02-13 04:53:27.399827 | orchestrator | + ansible-core==2.18.13
2026-02-13 04:53:27.399834 | orchestrator | + cffi==2.0.0
2026-02-13 04:53:27.399842 | orchestrator | + cryptography==46.0.5
2026-02-13 04:53:27.399849 | orchestrator | + jinja2==3.1.6
2026-02-13 04:53:27.399856 | orchestrator | + markupsafe==3.0.3
2026-02-13 04:53:27.399863 | orchestrator | + netaddr==1.3.0
2026-02-13 04:53:27.399869 | orchestrator | + packaging==26.0
2026-02-13 04:53:27.399875 | orchestrator | + pycparser==3.0
2026-02-13 04:53:27.399886 | orchestrator | + pyyaml==6.0.3
2026-02-13 04:53:27.399893 | orchestrator | + resolvelib==1.0.1
2026-02-13 04:53:28.599225 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-197660wh0lyjc1/tmpr627a7bl/ansible-collection-servicesxe2qi_z0'...
2026-02-13 04:53:29.986297 | orchestrator | Your branch is up to date with 'origin/main'.
2026-02-13 04:53:29.986364 | orchestrator | Already on 'main'
2026-02-13 04:53:30.528958 | orchestrator | Starting galaxy collection install process
2026-02-13 04:53:30.529066 | orchestrator | Process install dependency map
2026-02-13 04:53:30.529090 | orchestrator | Starting collection install process
2026-02-13 04:53:30.529112 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services'
2026-02-13 04:53:30.529133 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services
2026-02-13 04:53:30.529152 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-13 04:53:31.055173 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-197706kdnjtvow/tmp97fr97wj/ansible-playbooks-manageraeywm__o'...
2026-02-13 04:53:31.637974 | orchestrator | Your branch is up to date with 'origin/main'.
2026-02-13 04:53:31.638095 | orchestrator | Already on 'main'
2026-02-13 04:53:31.895902 | orchestrator | Starting galaxy collection install process
2026-02-13 04:53:31.896070 | orchestrator | Process install dependency map
2026-02-13 04:53:31.896839 | orchestrator | Starting collection install process
2026-02-13 04:53:31.896884 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager'
2026-02-13 04:53:31.896892 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager
2026-02-13 04:53:31.896898 | orchestrator | osism.manager:999.0.0 was installed successfully
2026-02-13 04:53:32.541672 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use
2026-02-13 04:53:32.541778 | orchestrator | -vvvv to see details
2026-02-13 04:53:32.948064 | orchestrator |
2026-02-13 04:53:32.948174 | orchestrator | PLAY [Apply role manager] ******************************************************
2026-02-13 04:53:32.948189 | orchestrator |
2026-02-13 04:53:32.948201 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-13 04:53:36.983867 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:36.984001 | orchestrator |
2026-02-13 04:53:36.984019 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-13 04:53:37.063309 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-13 04:53:37.063462 | orchestrator |
2026-02-13 04:53:37.063500 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-13 04:53:38.827042 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:38.827141 | orchestrator |
2026-02-13 04:53:38.827158 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-13 04:53:38.893931 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:38.894097 | orchestrator |
2026-02-13 04:53:38.894125 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-13 04:53:38.973249 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-13 04:53:38.973321 | orchestrator |
2026-02-13 04:53:38.973328 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-13 04:53:43.202300 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible)
2026-02-13 04:53:43.202494 | orchestrator | ok: [testbed-manager] => (item=/opt/archive)
2026-02-13 04:53:43.202527 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-13 04:53:43.202562 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data)
2026-02-13 04:53:43.202574 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-13 04:53:43.202586 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-13 04:53:43.202597 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-13 04:53:43.202608 | orchestrator | ok: [testbed-manager] => (item=/opt/state)
2026-02-13 04:53:43.202619 | orchestrator |
2026-02-13 04:53:43.202632 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-13 04:53:44.280362 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:44.280562 | orchestrator |
2026-02-13 04:53:44.280595 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-13 04:53:45.290544 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:45.290667 | orchestrator |
2026-02-13 04:53:45.290695 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-13 04:53:45.386606 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-13 04:53:45.386730 | orchestrator |
2026-02-13 04:53:45.386758 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-13 04:53:47.219234 | orchestrator | ok: [testbed-manager] => (item=ara)
2026-02-13 04:53:47.219334 | orchestrator | ok: [testbed-manager] => (item=ara-server)
2026-02-13 04:53:47.219344 | orchestrator |
2026-02-13 04:53:47.219352 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-13 04:53:48.182059 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:48.182166 | orchestrator |
2026-02-13 04:53:48.182184 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-13 04:53:48.256354 | orchestrator | skipping: [testbed-manager]
2026-02-13 04:53:48.256485 | orchestrator |
2026-02-13 04:53:48.256507 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-13 04:53:48.349986 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-13 04:53:48.350138 | orchestrator |
2026-02-13 04:53:48.350154 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-13 04:53:49.270175 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:49.270290 | orchestrator |
2026-02-13 04:53:49.270308 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-13 04:53:49.346196 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-13 04:53:49.346329 | orchestrator |
2026-02-13 04:53:49.346358 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-13 04:53:51.229619 | orchestrator | ok: [testbed-manager] => (item=None)
2026-02-13 04:53:51.229708 | orchestrator | ok: [testbed-manager] => (item=None)
2026-02-13 04:53:51.229720 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:51.229729 | orchestrator |
2026-02-13 04:53:51.229738 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-13 04:53:52.165819 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:52.165892 | orchestrator |
2026-02-13 04:53:52.165901 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-13 04:53:52.234299 | orchestrator | skipping: [testbed-manager]
2026-02-13 04:53:52.234395 | orchestrator |
2026-02-13 04:53:52.234451 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-13 04:53:52.336340 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-13 04:53:52.336447 | orchestrator |
2026-02-13 04:53:52.336456 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-13 04:53:53.031706 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:53.031814 | orchestrator |
2026-02-13 04:53:53.031834 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-13 04:53:53.594289 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:53.594379 | orchestrator |
2026-02-13 04:53:53.594389 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-13 04:53:55.495532 | orchestrator | ok: [testbed-manager] => (item=conductor)
2026-02-13 04:53:55.495621 | orchestrator | ok: [testbed-manager] => (item=openstack)
2026-02-13 04:53:55.495630 | orchestrator |
2026-02-13 04:53:55.495638 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-13 04:53:56.686237 | orchestrator | changed: [testbed-manager]
2026-02-13 04:53:56.686359 | orchestrator |
2026-02-13 04:53:56.686384 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-13 04:53:57.251480 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:57.251607 | orchestrator |
2026-02-13 04:53:57.251637 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-13 04:53:57.809564 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:57.809666 | orchestrator |
2026-02-13 04:53:57.809705 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-13 04:53:57.867800 | orchestrator | skipping: [testbed-manager]
2026-02-13 04:53:57.867890 | orchestrator |
2026-02-13 04:53:57.867903 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-13 04:53:57.940282 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-13 04:53:57.940374 | orchestrator |
2026-02-13 04:53:57.940386 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-13 04:53:58.003209 | orchestrator | ok: [testbed-manager]
2026-02-13 04:53:58.003307 | orchestrator |
2026-02-13 04:53:58.003323 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-13 04:54:01.900041 | orchestrator | ok: [testbed-manager] => (item=osism)
2026-02-13 04:54:01.900144 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker)
2026-02-13 04:54:01.900156 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager)
2026-02-13 04:54:01.900163 | orchestrator |
2026-02-13 04:54:01.900172 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-13 04:54:02.893233 | orchestrator | ok: [testbed-manager]
2026-02-13 04:54:02.893352 | orchestrator |
2026-02-13 04:54:02.893374 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-13 04:54:03.921900 | orchestrator | ok: [testbed-manager]
2026-02-13 04:54:03.922097 | orchestrator |
2026-02-13 04:54:03.922131 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-13 04:54:04.868683 | orchestrator | ok: [testbed-manager]
2026-02-13 04:54:04.868786 | orchestrator |
2026-02-13 04:54:04.868804 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-13 04:54:04.944565 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-13 04:54:04.944666 | orchestrator |
2026-02-13 04:54:04.944686 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-13 04:54:04.992602 | orchestrator | ok: [testbed-manager]
2026-02-13 04:54:04.992673 | orchestrator |
2026-02-13 04:54:04.992681 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-13 04:54:05.991044 | orchestrator | ok: [testbed-manager] => (item=osism-include)
2026-02-13 04:54:05.991156 | orchestrator |
2026-02-13 04:54:05.991182 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-13 04:54:06.076193 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-13 04:54:06.076285 | orchestrator |
2026-02-13 04:54:06.076299 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-13 04:54:07.046098 | orchestrator | ok: [testbed-manager]
2026-02-13 04:54:07.046212 | orchestrator |
2026-02-13 04:54:07.046229 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-13 04:54:08.115393 | orchestrator | ok: [testbed-manager]
2026-02-13 04:54:08.115553 | orchestrator |
2026-02-13 04:54:08.115571 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-13 04:54:08.194874 | orchestrator | skipping: [testbed-manager]
2026-02-13 04:54:08.194968 | orchestrator |
2026-02-13 04:54:08.194982 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-13 04:54:08.266522 | orchestrator | ok: [testbed-manager]
2026-02-13 04:54:08.266638 | orchestrator |
2026-02-13 04:54:08.266662 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-13 04:54:09.629118 | orchestrator | changed: [testbed-manager]
2026-02-13 04:54:09.629222 | orchestrator |
2026-02-13 04:54:09.629237 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-13 04:55:16.347207 | orchestrator | changed: [testbed-manager]
2026-02-13 04:55:16.347326 | orchestrator |
2026-02-13 04:55:16.347342 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-13 04:55:17.572645 | orchestrator | ok: [testbed-manager]
2026-02-13 04:55:17.572799 | orchestrator |
2026-02-13 04:55:17.572831 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-13 04:55:17.633012 | orchestrator | skipping: [testbed-manager]
2026-02-13 04:55:17.633138 | orchestrator |
2026-02-13 04:55:17.633163 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-13 04:55:18.475812 | orchestrator | ok: [testbed-manager]
2026-02-13 04:55:18.475915 | orchestrator |
2026-02-13 04:55:18.475931 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-13 04:55:18.546892 | orchestrator | skipping: [testbed-manager]
2026-02-13 04:55:18.547035 | orchestrator |
2026-02-13 04:55:18.547064 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-13 04:55:18.547084 | orchestrator |
2026-02-13 04:55:18.547100 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-13 04:55:37.052731 | orchestrator | changed: [testbed-manager]
2026-02-13 04:55:37.052844 | orchestrator |
2026-02-13 04:55:37.052861 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-13 04:56:37.115362 | orchestrator | Pausing for 60 seconds
2026-02-13 04:56:37.115825 | orchestrator | changed: [testbed-manager]
2026-02-13 04:56:37.116703 | orchestrator |
2026-02-13 04:56:37.116768 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] ***
2026-02-13 04:56:37.173834 | orchestrator | ok: [testbed-manager]
2026-02-13 04:56:37.173927 | orchestrator |
2026-02-13 04:56:37.173940 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-13 04:56:40.754595 | orchestrator | changed: [testbed-manager]
2026-02-13 04:56:40.754701 | orchestrator |
2026-02-13 04:56:40.754721 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-13 04:57:43.515542 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-13 04:57:43.515654 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-13 04:57:43.515669 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-02-13 04:57:43.515681 | orchestrator | changed: [testbed-manager] 2026-02-13 04:57:43.515693 | orchestrator | 2026-02-13 04:57:43.515704 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-13 04:57:54.797523 | orchestrator | changed: [testbed-manager] 2026-02-13 04:57:54.797667 | orchestrator | 2026-02-13 04:57:54.797695 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-13 04:57:54.889810 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-13 04:57:54.889958 | orchestrator | 2026-02-13 04:57:54.889974 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-13 04:57:54.889983 | orchestrator | 2026-02-13 04:57:54.889991 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-13 04:57:54.958882 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:57:54.958989 | orchestrator | 2026-02-13 04:57:54.959007 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-13 04:57:55.040225 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-13 04:57:55.040295 | orchestrator | 2026-02-13 04:57:55.040317 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-13 04:57:56.155831 | orchestrator | changed: [testbed-manager] 2026-02-13 04:57:56.155937 | orchestrator | 2026-02-13 04:57:56.155955 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-13 04:57:59.730791 
| orchestrator | ok: [testbed-manager] 2026-02-13 04:57:59.730860 | orchestrator | 2026-02-13 04:57:59.730867 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-13 04:57:59.822647 | orchestrator | ok: [testbed-manager] => { 2026-02-13 04:57:59.822754 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-13 04:57:59.822771 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-13 04:57:59.822782 | orchestrator | "Checking running containers against expected versions...", 2026-02-13 04:57:59.822795 | orchestrator | "", 2026-02-13 04:57:59.822806 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-13 04:57:59.822817 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-13 04:57:59.822828 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.822840 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-13 04:57:59.822851 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.822862 | orchestrator | "", 2026-02-13 04:57:59.822873 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-13 04:57:59.822884 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-13 04:57:59.822895 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.822906 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-13 04:57:59.822917 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.822928 | orchestrator | "", 2026-02-13 04:57:59.822938 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-13 04:57:59.822949 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-13 04:57:59.822960 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.822971 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-13 04:57:59.822982 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.822992 | orchestrator | "", 2026-02-13 04:57:59.823003 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-13 04:57:59.823014 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-13 04:57:59.823024 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823035 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-13 04:57:59.823046 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823057 | orchestrator | "", 2026-02-13 04:57:59.823068 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-13 04:57:59.823078 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-13 04:57:59.823089 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823100 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-13 04:57:59.823110 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823121 | orchestrator | "", 2026-02-13 04:57:59.823132 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-13 04:57:59.823162 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823173 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823186 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823199 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823211 | orchestrator | "", 2026-02-13 04:57:59.823224 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-13 04:57:59.823237 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-13 04:57:59.823249 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823261 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-13 
04:57:59.823274 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823286 | orchestrator | "", 2026-02-13 04:57:59.823298 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-13 04:57:59.823311 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-13 04:57:59.823324 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823346 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-13 04:57:59.823358 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823371 | orchestrator | "", 2026-02-13 04:57:59.823384 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-13 04:57:59.823396 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-13 04:57:59.823407 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823417 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-13 04:57:59.823428 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823439 | orchestrator | "", 2026-02-13 04:57:59.823479 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-13 04:57:59.823492 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-13 04:57:59.823503 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823514 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-13 04:57:59.823525 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823536 | orchestrator | "", 2026-02-13 04:57:59.823547 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-13 04:57:59.823558 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823568 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823579 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823590 | orchestrator | " Status: ✅ MATCH", 2026-02-13 
04:57:59.823601 | orchestrator | "", 2026-02-13 04:57:59.823611 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-13 04:57:59.823622 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823633 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823644 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823655 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823665 | orchestrator | "", 2026-02-13 04:57:59.823677 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-13 04:57:59.823687 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823698 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823709 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823720 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823731 | orchestrator | "", 2026-02-13 04:57:59.823742 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-13 04:57:59.823752 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823763 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823774 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823802 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823813 | orchestrator | "", 2026-02-13 04:57:59.823824 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-13 04:57:59.823835 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823854 | orchestrator | " Enabled: true", 2026-02-13 04:57:59.823865 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-13 04:57:59.823876 | orchestrator | " Status: ✅ MATCH", 2026-02-13 04:57:59.823887 | orchestrator | "", 2026-02-13 04:57:59.823898 | orchestrator | "=== Summary 
===", 2026-02-13 04:57:59.823909 | orchestrator | "Errors (version mismatches): 0", 2026-02-13 04:57:59.823920 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-13 04:57:59.823931 | orchestrator | "", 2026-02-13 04:57:59.823942 | orchestrator | "✅ All running containers match expected versions!" 2026-02-13 04:57:59.823953 | orchestrator | ] 2026-02-13 04:57:59.823964 | orchestrator | } 2026-02-13 04:57:59.823975 | orchestrator | 2026-02-13 04:57:59.823987 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-13 04:57:59.891201 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:57:59.891296 | orchestrator | 2026-02-13 04:57:59.891311 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:57:59.891323 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-02-13 04:57:59.891333 | orchestrator | 2026-02-13 04:58:12.278222 | orchestrator | 2026-02-13 04:58:12 | INFO  | Task 64868a83-3cb5-4dd1-bc58-3c39abf8ab7e (sync inventory) is running in background. Output coming soon. 
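The version check above walks each service, compares the image a running container reports against the expected tag, and counts mismatches. A minimal sketch of that comparison logic, with the Docker call stubbed out so it runs anywhere — `get_running_image` is a hypothetical stand-in for something like `docker inspect --format '{{.Config.Image}}' <name>`, and the service/tag pairs are copied from the log, not read from real configuration:

```shell
#!/bin/sh
# Hypothetical stub standing in for:
#   docker inspect --format '{{.Config.Image}}' "$1"
# so this sketch runs without Docker.
get_running_image() {
    case "$1" in
        osismclient) echo "registry.osism.tech/osism/osism:0.20251208.0" ;;
        mariadb)     echo "registry.osism.tech/dockerhub/library/mariadb:11.8.4" ;;
        *)           echo "" ;;
    esac
}

errors=0
check_service() {
    name=$1; expected=$2
    running=$(get_running_image "$name")
    if [ -z "$running" ]; then
        echo "$name: WARNING (expected container not running)"
    elif [ "$running" = "$expected" ]; then
        echo "$name: MATCH"
    else
        echo "$name: MISMATCH ($running != $expected)"
        errors=$((errors + 1))
    fi
}

check_service osismclient "registry.osism.tech/osism/osism:0.20251208.0"
check_service mariadb     "registry.osism.tech/dockerhub/library/mariadb:11.8.4"
echo "Errors (version mismatches): $errors"
```

With all tags matching, as in this run, the error counter stays at zero and the play proceeds to the recap.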
2026-02-13 04:58:40.754696 | orchestrator | 2026-02-13 04:58:13 | INFO  | Starting group_vars file reorganization 2026-02-13 04:58:40.754851 | orchestrator | 2026-02-13 04:58:13 | INFO  | Moved 0 file(s) to their respective directories 2026-02-13 04:58:40.754868 | orchestrator | 2026-02-13 04:58:13 | INFO  | Group_vars file reorganization completed 2026-02-13 04:58:40.754899 | orchestrator | 2026-02-13 04:58:16 | INFO  | Starting variable preparation from inventory 2026-02-13 04:58:40.754911 | orchestrator | 2026-02-13 04:58:19 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-13 04:58:40.754923 | orchestrator | 2026-02-13 04:58:19 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-13 04:58:40.754935 | orchestrator | 2026-02-13 04:58:19 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-13 04:58:40.754946 | orchestrator | 2026-02-13 04:58:19 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-13 04:58:40.754957 | orchestrator | 2026-02-13 04:58:19 | INFO  | Variable preparation completed 2026-02-13 04:58:40.754968 | orchestrator | 2026-02-13 04:58:21 | INFO  | Starting inventory overwrite handling 2026-02-13 04:58:40.754980 | orchestrator | 2026-02-13 04:58:21 | INFO  | Handling group overwrites in 99-overwrite 2026-02-13 04:58:40.754991 | orchestrator | 2026-02-13 04:58:21 | INFO  | Removing group frr:children from 60-generic 2026-02-13 04:58:40.755002 | orchestrator | 2026-02-13 04:58:21 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-13 04:58:40.755013 | orchestrator | 2026-02-13 04:58:21 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-13 04:58:40.755024 | orchestrator | 2026-02-13 04:58:21 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-13 04:58:40.755035 | orchestrator | 2026-02-13 04:58:21 | INFO  | Handling group overwrites in 20-roles 2026-02-13 04:58:40.755047 | orchestrator | 2026-02-13 04:58:21 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-13 04:58:40.755058 | orchestrator | 2026-02-13 04:58:21 | INFO  | Removed 5 group(s) in total 2026-02-13 04:58:40.755069 | orchestrator | 2026-02-13 04:58:21 | INFO  | Inventory overwrite handling completed 2026-02-13 04:58:40.755080 | orchestrator | 2026-02-13 04:58:22 | INFO  | Starting merge of inventory files 2026-02-13 04:58:40.755091 | orchestrator | 2026-02-13 04:58:22 | INFO  | Inventory files merged successfully 2026-02-13 04:58:40.755126 | orchestrator | 2026-02-13 04:58:27 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-13 04:58:40.755137 | orchestrator | 2026-02-13 04:58:39 | INFO  | Successfully wrote ClusterShell configuration 2026-02-13 04:58:41.095026 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-13 04:58:41.095115 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-13 04:58:41.095126 | orchestrator | + local max_attempts=60 2026-02-13 04:58:41.095135 | orchestrator | + local name=kolla-ansible 2026-02-13 04:58:41.095142 | orchestrator | + local attempt_num=1 2026-02-13 04:58:41.095698 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-13 04:58:41.133929 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-13 04:58:41.134000 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-13 04:58:41.134009 | orchestrator | + local max_attempts=60 2026-02-13 04:58:41.134057 | orchestrator | + local name=osism-ansible 2026-02-13 04:58:41.134064 | orchestrator | + local attempt_num=1 2026-02-13 04:58:41.134614 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-13 04:58:41.173794 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-13 04:58:41.173877 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-13 04:58:41.362678 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-13 04:58:41.362793 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-13 04:58:41.362822 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-13 04:58:41.362842 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-13 04:58:41.362866 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-13 04:58:41.362887 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-13 04:58:41.362906 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-13 04:58:41.362924 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-02-13 04:58:41.362936 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 20 seconds ago 2026-02-13 04:58:41.362947 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-13 04:58:41.362958 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-13 04:58:41.362969 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-13 04:58:41.362980 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-13 04:58:41.363018 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-13 04:58:41.363030 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-13 04:58:41.363041 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-13 04:58:41.367756 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-13 04:58:41.367829 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-13 04:58:41.367843 | orchestrator | + osism apply facts 2026-02-13 04:58:53.404822 | orchestrator | 2026-02-13 04:58:53 | INFO  | Task 00dff1f9-a9ab-40b7-8d94-bbb9684db193 (facts) was prepared for execution. 2026-02-13 04:58:53.404925 | orchestrator | 2026-02-13 04:58:53 | INFO  | It takes a moment until task 00dff1f9-a9ab-40b7-8d94-bbb9684db193 (facts) has been started and output is visible here. 
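The `wait_for_container_healthy` helper traced above polls `docker inspect` until the container reports `healthy` or the attempt budget runs out. A minimal re-sketch of that loop, with the status probe injectable via `HEALTH_CMD` (a name invented here for testability; the real script calls `/usr/bin/docker inspect -f '{{.State.Health.Status}}' <name>` directly):

```shell
#!/bin/sh
# Probe command; override HEALTH_CMD to stub Docker out in tests.
HEALTH_CMD=${HEALTH_CMD:-"docker inspect -f {{.State.Health.Status}}"}

wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    # Poll until the health status string is exactly "healthy".
    until [ "$($HEALTH_CMD "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "$name: gave up after $max_attempts attempts"
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 1
    done
    echo "$name: healthy"
}
```

Invoked as in the trace, e.g. `wait_for_container_healthy 60 kolla-ansible`, it returns immediately here because both containers already report healthy.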
2026-02-13 04:59:13.196813 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-13 04:59:13.196926 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-13 04:59:13.196956 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-13 04:59:13.196968 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-13 04:59:13.196991 | orchestrator | 2026-02-13 04:59:13.197005 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-13 04:59:13.197016 | orchestrator | 2026-02-13 04:59:13.197027 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-13 04:59:13.197039 | orchestrator | Friday 13 February 2026 04:58:59 +0000 (0:00:01.967) 0:00:01.967 ******* 2026-02-13 04:59:13.197050 | orchestrator | ok: [testbed-manager] 2026-02-13 04:59:13.197063 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:59:13.197074 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:59:13.197085 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:59:13.197096 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:59:13.197107 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:59:13.197119 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:59:13.197130 | orchestrator | 2026-02-13 04:59:13.197141 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-13 04:59:13.197152 | orchestrator | Friday 13 February 2026 04:59:01 +0000 (0:00:02.132) 0:00:04.100 ******* 2026-02-13 04:59:13.197163 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:59:13.197174 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:59:13.197204 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:59:13.197216 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:59:13.197231 | orchestrator | skipping: [testbed-node-3] 2026-02-13 
04:59:13.197244 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:59:13.197255 | orchestrator | skipping: [testbed-node-5] 2026-02-13 04:59:13.197266 | orchestrator | 2026-02-13 04:59:13.197278 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-13 04:59:13.197289 | orchestrator | 2026-02-13 04:59:13.197300 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-13 04:59:13.197311 | orchestrator | Friday 13 February 2026 04:59:03 +0000 (0:00:01.839) 0:00:05.939 ******* 2026-02-13 04:59:13.197323 | orchestrator | ok: [testbed-node-0] 2026-02-13 04:59:13.197335 | orchestrator | ok: [testbed-node-1] 2026-02-13 04:59:13.197347 | orchestrator | ok: [testbed-node-2] 2026-02-13 04:59:13.197359 | orchestrator | ok: [testbed-manager] 2026-02-13 04:59:13.197396 | orchestrator | ok: [testbed-node-3] 2026-02-13 04:59:13.197407 | orchestrator | ok: [testbed-node-4] 2026-02-13 04:59:13.197417 | orchestrator | ok: [testbed-node-5] 2026-02-13 04:59:13.197428 | orchestrator | 2026-02-13 04:59:13.197440 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-13 04:59:13.197452 | orchestrator | 2026-02-13 04:59:13.197465 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-13 04:59:13.197505 | orchestrator | Friday 13 February 2026 04:59:10 +0000 (0:00:07.138) 0:00:13.078 ******* 2026-02-13 04:59:13.197517 | orchestrator | skipping: [testbed-manager] 2026-02-13 04:59:13.197529 | orchestrator | skipping: [testbed-node-0] 2026-02-13 04:59:13.197540 | orchestrator | skipping: [testbed-node-1] 2026-02-13 04:59:13.197549 | orchestrator | skipping: [testbed-node-2] 2026-02-13 04:59:13.197559 | orchestrator | skipping: [testbed-node-3] 2026-02-13 04:59:13.197570 | orchestrator | skipping: [testbed-node-4] 2026-02-13 04:59:13.197581 | orchestrator | skipping: [testbed-node-5] 
2026-02-13 04:59:13.197592 | orchestrator | 2026-02-13 04:59:13.197603 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 04:59:13.197615 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:59:13.197627 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:59:13.197638 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:59:13.197650 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:59:13.197661 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:59:13.197673 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:59:13.197684 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 04:59:13.197696 | orchestrator | 2026-02-13 04:59:13.197707 | orchestrator | 2026-02-13 04:59:13.197719 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 04:59:13.197730 | orchestrator | Friday 13 February 2026 04:59:12 +0000 (0:00:01.747) 0:00:14.826 ******* 2026-02-13 04:59:13.197741 | orchestrator | =============================================================================== 2026-02-13 04:59:13.197752 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.14s 2026-02-13 04:59:13.197764 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.13s 2026-02-13 04:59:13.197775 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.84s 2026-02-13 04:59:13.197786 | orchestrator | Gather facts for all hosts 
---------------------------------------------- 1.75s 2026-02-13 04:59:13.521064 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-13 04:59:13.627130 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-13 04:59:13.628070 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-13 04:59:13.674313 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-02-13 04:59:13.674408 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-02-13 04:59:13.682934 | orchestrator | + set -e 2026-02-13 04:59:13.683012 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-02-13 04:59:13.683026 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-13 04:59:13.695352 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-02-13 04:59:13.705250 | orchestrator | 2026-02-13 04:59:13.705364 | orchestrator | # UPGRADE SERVICES 2026-02-13 04:59:13.705418 | orchestrator | 2026-02-13 04:59:13.705437 | orchestrator | + set -e 2026-02-13 04:59:13.705456 | orchestrator | + echo 2026-02-13 04:59:13.705535 | orchestrator | + echo '# UPGRADE SERVICES' 2026-02-13 04:59:13.705555 | orchestrator | + echo 2026-02-13 04:59:13.705574 | orchestrator | + source /opt/manager-vars.sh 2026-02-13 04:59:13.706613 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-13 04:59:13.706760 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-13 04:59:13.706783 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-13 04:59:13.706796 | orchestrator | ++ CEPH_VERSION=reef 2026-02-13 04:59:13.706808 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-13 04:59:13.706821 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-13 04:59:13.706832 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-13 04:59:13.706845 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-13 04:59:13.706858 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2026-02-13 04:59:13.706870 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-13 04:59:13.706883 | orchestrator | ++ export ARA=false 2026-02-13 04:59:13.706896 | orchestrator | ++ ARA=false 2026-02-13 04:59:13.706908 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-13 04:59:13.706920 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-13 04:59:13.706932 | orchestrator | ++ export TEMPEST=false 2026-02-13 04:59:13.706946 | orchestrator | ++ TEMPEST=false 2026-02-13 04:59:13.706958 | orchestrator | ++ export IS_ZUUL=true 2026-02-13 04:59:13.706970 | orchestrator | ++ IS_ZUUL=true 2026-02-13 04:59:13.706983 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 04:59:13.706997 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 04:59:13.707010 | orchestrator | ++ export EXTERNAL_API=false 2026-02-13 04:59:13.707023 | orchestrator | ++ EXTERNAL_API=false 2026-02-13 04:59:13.707048 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-13 04:59:13.707062 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-13 04:59:13.707074 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-13 04:59:13.707087 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-13 04:59:13.707101 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-13 04:59:13.707113 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-13 04:59:13.707126 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-13 04:59:13.707139 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-13 04:59:13.707173 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-02-13 04:59:13.707188 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-02-13 04:59:13.707202 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-13 04:59:13.717218 | orchestrator | + set -e 2026-02-13 04:59:13.717296 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 04:59:13.718147 | orchestrator | ++ export INTERACTIVE=false 2026-02-13 04:59:13.718224 | 
orchestrator | ++ INTERACTIVE=false 2026-02-13 04:59:13.718236 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-13 04:59:13.718245 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-13 04:59:13.718252 | orchestrator | + source /opt/manager-vars.sh 2026-02-13 04:59:13.718261 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-13 04:59:13.718268 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-13 04:59:13.718277 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-13 04:59:13.718285 | orchestrator | ++ CEPH_VERSION=reef 2026-02-13 04:59:13.718293 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-13 04:59:13.718302 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-13 04:59:13.718310 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-13 04:59:13.718318 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-13 04:59:13.718326 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-13 04:59:13.718334 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-13 04:59:13.718342 | orchestrator | ++ export ARA=false 2026-02-13 04:59:13.718356 | orchestrator | ++ ARA=false 2026-02-13 04:59:13.718370 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-13 04:59:13.718383 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-13 04:59:13.718396 | orchestrator | ++ export TEMPEST=false 2026-02-13 04:59:13.718409 | orchestrator | ++ TEMPEST=false 2026-02-13 04:59:13.718422 | orchestrator | ++ export IS_ZUUL=true 2026-02-13 04:59:13.718435 | orchestrator | ++ IS_ZUUL=true 2026-02-13 04:59:13.718450 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 04:59:13.718635 | orchestrator | 2026-02-13 04:59:13.718651 | orchestrator | # PULL IMAGES 2026-02-13 04:59:13.718659 | orchestrator | 2026-02-13 04:59:13.718667 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.228 2026-02-13 04:59:13.718675 | orchestrator | ++ export EXTERNAL_API=false 2026-02-13 04:59:13.718683 | orchestrator | ++ EXTERNAL_API=false 2026-02-13 04:59:13.718691 | 
orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-13 04:59:13.718699 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-13 04:59:13.718707 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-13 04:59:13.718719 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-13 04:59:13.718760 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-13 04:59:13.718774 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-13 04:59:13.718787 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-13 04:59:13.718801 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-13 04:59:13.718815 | orchestrator | + echo 2026-02-13 04:59:13.718828 | orchestrator | + echo '# PULL IMAGES' 2026-02-13 04:59:13.718841 | orchestrator | + echo 2026-02-13 04:59:13.720337 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-13 04:59:13.787535 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-13 04:59:13.787663 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-13 04:59:15.850862 | orchestrator | 2026-02-13 04:59:15 | INFO  | Trying to run play pull-images in environment custom 2026-02-13 04:59:26.050224 | orchestrator | 2026-02-13 04:59:26 | INFO  | Task 5285514c-3193-4512-9134-18253a2f846a (pull-images) was prepared for execution. 2026-02-13 04:59:26.050306 | orchestrator | 2026-02-13 04:59:26 | INFO  | Task 5285514c-3193-4512-9134-18253a2f846a is running in background. No more output. Check ARA for logs. 
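The trace above shows a version gate before the image pull: `semver 9.5.0 7.0.0` is evaluated, `[[ 1 -ge 0 ]]` passes, and only then does `osism apply ... pull-images` run. A minimal sketch of that pattern, assuming `semver A B` prints `1`/`0`/`-1` for `A>B`/`A==B`/`A<B` as the `[[ 1 -ge 0 ]]` comparison suggests; the helper below is a simplified stand-in (numeric parts only, pre-release suffix stripped), not the real `semver` binary:

```shell
# Simplified semver comparison: prints 1/0/-1. Pre-release suffix is ignored.
semver() {
  IFS=. read -r a1 a2 a3 <<<"${1%%-*}"   # "9.5.0" -> 9 5 0
  IFS=. read -r b1 b2 b3 <<<"${2%%-*}"
  for pair in "$a1 $b1" "$a2 $b2" "$a3 $b3"; do
    set -- $pair
    if (( $1 > $2 )); then echo 1; return; fi
    if (( $1 < $2 )); then echo -1; return; fi
  done
  echo 0
}

# The gate seen in the log: only pull images on manager >= 7.0.0.
MANAGER_VERSION=9.5.0
if [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]]; then
  echo "would run: osism apply --no-wait -r 2 -e custom pull-images"
fi
```

With `MANAGER_VERSION=9.5.0` the comparison yields `1`, matching the `[[ 1 -ge 0 ]]` line in the trace.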
2026-02-13 04:59:26.399861 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-02-13 04:59:26.409027 | orchestrator | + set -e 2026-02-13 04:59:26.409105 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 04:59:26.409122 | orchestrator | ++ export INTERACTIVE=false 2026-02-13 04:59:26.409137 | orchestrator | ++ INTERACTIVE=false 2026-02-13 04:59:26.409150 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-13 04:59:26.409163 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-13 04:59:26.409176 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-13 04:59:26.411433 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-13 04:59:26.422798 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-02-13 04:59:26.422855 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-02-13 04:59:26.422865 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3 2026-02-13 04:59:26.482237 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-13 04:59:26.482333 | orchestrator | + osism apply frr 2026-02-13 04:59:38.763707 | orchestrator | 2026-02-13 04:59:38 | INFO  | Task f9ed99be-ded1-4960-93dd-7cc5a71a593d (frr) was prepared for execution. 2026-02-13 04:59:38.763787 | orchestrator | 2026-02-13 04:59:38 | INFO  | It takes a moment until task f9ed99be-ded1-4960-93dd-7cc5a71a593d (frr) has been started and output is visible here. 
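The `500-kubernetes.sh` trace derives the new `MANAGER_VERSION` (`10.0.0-rc.1`) with awk: `-F': '` splits each line on colon-space, so the value of the `manager_version:` key is field 2. Reproduced here against a stand-in file — the path and file content are illustrative, not the real repository file:

```shell
# Stand-in for environments/manager/configuration.yml (illustrative content).
cat > /tmp/configuration.yml <<'EOF'
manager_version: 10.0.0-rc.1
openstack_version: 2024.2
EOF

# Same extraction as the trace: split on ": ", match the key, print the value.
MANAGER_VERSION=$(awk -F': ' '/^manager_version:/ { print $2 }' /tmp/configuration.yml)
echo "$MANAGER_VERSION"
```

This is how the upgrade script switches from the deployed 9.5.0 to the target 10.0.0-rc.1 before re-running the semver gate.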
2026-02-13 05:00:12.353373 | orchestrator | 2026-02-13 05:00:12.353540 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-13 05:00:12.353559 | orchestrator | 2026-02-13 05:00:12.353572 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-13 05:00:12.353584 | orchestrator | Friday 13 February 2026 04:59:46 +0000 (0:00:03.396) 0:00:03.396 ******* 2026-02-13 05:00:12.353595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-13 05:00:12.353608 | orchestrator | 2026-02-13 05:00:12.353619 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-13 05:00:12.353635 | orchestrator | Friday 13 February 2026 04:59:49 +0000 (0:00:02.220) 0:00:05.617 ******* 2026-02-13 05:00:12.353654 | orchestrator | ok: [testbed-manager] 2026-02-13 05:00:12.353673 | orchestrator | 2026-02-13 05:00:12.353692 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-13 05:00:12.353710 | orchestrator | Friday 13 February 2026 04:59:51 +0000 (0:00:02.378) 0:00:07.996 ******* 2026-02-13 05:00:12.353729 | orchestrator | ok: [testbed-manager] 2026-02-13 05:00:12.353746 | orchestrator | 2026-02-13 05:00:12.353764 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-13 05:00:12.353784 | orchestrator | Friday 13 February 2026 04:59:54 +0000 (0:00:02.800) 0:00:10.796 ******* 2026-02-13 05:00:12.353802 | orchestrator | ok: [testbed-manager] 2026-02-13 05:00:12.353821 | orchestrator | 2026-02-13 05:00:12.353840 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-13 05:00:12.353858 | orchestrator | Friday 13 February 2026 04:59:56 +0000 (0:00:01.932) 0:00:12.728 ******* 2026-02-13 
05:00:12.353911 | orchestrator | ok: [testbed-manager] 2026-02-13 05:00:12.353926 | orchestrator | 2026-02-13 05:00:12.353940 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-13 05:00:12.353953 | orchestrator | Friday 13 February 2026 04:59:58 +0000 (0:00:01.890) 0:00:14.619 ******* 2026-02-13 05:00:12.353965 | orchestrator | ok: [testbed-manager] 2026-02-13 05:00:12.353979 | orchestrator | 2026-02-13 05:00:12.353992 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-13 05:00:12.354005 | orchestrator | Friday 13 February 2026 05:00:00 +0000 (0:00:02.402) 0:00:17.021 ******* 2026-02-13 05:00:12.354079 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:00:12.354094 | orchestrator | 2026-02-13 05:00:12.354109 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-13 05:00:12.354122 | orchestrator | Friday 13 February 2026 05:00:01 +0000 (0:00:01.132) 0:00:18.154 ******* 2026-02-13 05:00:12.354134 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:00:12.354147 | orchestrator | 2026-02-13 05:00:12.354159 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-13 05:00:12.354172 | orchestrator | Friday 13 February 2026 05:00:02 +0000 (0:00:01.161) 0:00:19.316 ******* 2026-02-13 05:00:12.354195 | orchestrator | ok: [testbed-manager] 2026-02-13 05:00:12.354208 | orchestrator | 2026-02-13 05:00:12.354220 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-13 05:00:12.354233 | orchestrator | Friday 13 February 2026 05:00:04 +0000 (0:00:01.938) 0:00:21.255 ******* 2026-02-13 05:00:12.354245 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-13 05:00:12.354275 | orchestrator | ok: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-13 05:00:12.354288 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-13 05:00:12.354300 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-13 05:00:12.354311 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-13 05:00:12.354322 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-13 05:00:12.354332 | orchestrator | 2026-02-13 05:00:12.354343 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-13 05:00:12.354354 | orchestrator | Friday 13 February 2026 05:00:09 +0000 (0:00:04.644) 0:00:25.900 ******* 2026-02-13 05:00:12.354365 | orchestrator | ok: [testbed-manager] 2026-02-13 05:00:12.354375 | orchestrator | 2026-02-13 05:00:12.354386 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 05:00:12.354397 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-13 05:00:12.354408 | orchestrator | 2026-02-13 05:00:12.354418 | orchestrator | 2026-02-13 05:00:12.354429 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 05:00:12.354440 | orchestrator | Friday 13 February 2026 05:00:12 +0000 (0:00:02.596) 0:00:28.496 ******* 2026-02-13 05:00:12.354451 | orchestrator | =============================================================================== 2026-02-13 05:00:12.354461 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 4.64s 2026-02-13 05:00:12.354472 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.80s 2026-02-13 05:00:12.354516 | orchestrator | 
osism.services.frr : Manage frr service --------------------------------- 2.60s 2026-02-13 05:00:12.354535 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.40s 2026-02-13 05:00:12.354554 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.38s 2026-02-13 05:00:12.354572 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 2.22s 2026-02-13 05:00:12.354590 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.94s 2026-02-13 05:00:12.354613 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.93s 2026-02-13 05:00:12.354644 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.89s 2026-02-13 05:00:12.354656 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.16s 2026-02-13 05:00:12.354667 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.13s 2026-02-13 05:00:12.679749 | orchestrator | + osism apply kubernetes 2026-02-13 05:00:14.735272 | orchestrator | 2026-02-13 05:00:14 | INFO  | Task 274533fb-45ad-4d31-b106-280f27ee0030 (kubernetes) was prepared for execution. 2026-02-13 05:00:14.735365 | orchestrator | 2026-02-13 05:00:14 | INFO  | It takes a moment until task 274533fb-45ad-4d31-b106-280f27ee0030 (kubernetes) has been started and output is visible here. 
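The frr play's "Set sysctl parameters" task looped over six kernel settings (forwarding on, redirects off, multipath hashing, linkdown route handling, loose rp_filter). Collected here for reference; a manual equivalent would pass each to `sysctl -w`, which needs root, so the commands are only echoed. The `FRR_SYSCTLS` variable name is ours, not the role's:

```shell
# The six items from the "Set sysctl parameters" loop in the play above.
FRR_SYSCTLS="
net.ipv4.ip_forward=1
net.ipv4.conf.all.send_redirects=0
net.ipv4.conf.all.accept_redirects=0
net.ipv4.fib_multipath_hash_policy=1
net.ipv4.conf.default.ignore_routes_with_linkdown=1
net.ipv4.conf.all.rp_filter=2
"

# Echo the equivalent manual commands instead of applying them.
for kv in $FRR_SYSCTLS; do
  echo "sysctl -w $kv"
done
```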
2026-02-13 05:01:01.358629 | orchestrator | 2026-02-13 05:01:01.358744 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-13 05:01:01.358761 | orchestrator | 2026-02-13 05:01:01.358777 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-13 05:01:01.358798 | orchestrator | Friday 13 February 2026 05:00:21 +0000 (0:00:01.766) 0:00:01.766 ******* 2026-02-13 05:01:01.358816 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:01:01.358835 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:01:01.358854 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:01:01.358874 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:01:01.358890 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:01:01.358909 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:01:01.358927 | orchestrator | 2026-02-13 05:01:01.358946 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-13 05:01:01.358964 | orchestrator | Friday 13 February 2026 05:00:25 +0000 (0:00:04.874) 0:00:06.641 ******* 2026-02-13 05:01:01.358982 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.359000 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:01:01.359018 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:01:01.359037 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:01:01.359054 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:01:01.359073 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:01:01.359093 | orchestrator | 2026-02-13 05:01:01.359112 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-13 05:01:01.359152 | orchestrator | Friday 13 February 2026 05:00:28 +0000 (0:00:02.424) 0:00:09.065 ******* 2026-02-13 05:01:01.359171 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.359189 | orchestrator | skipping: [testbed-node-4] 2026-02-13 
05:01:01.359207 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:01:01.359226 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:01:01.359245 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:01:01.359262 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:01:01.359281 | orchestrator | 2026-02-13 05:01:01.359301 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-13 05:01:01.359321 | orchestrator | Friday 13 February 2026 05:00:30 +0000 (0:00:02.574) 0:00:11.640 ******* 2026-02-13 05:01:01.359339 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:01:01.359358 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:01:01.359376 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:01:01.359395 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:01:01.359414 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:01:01.359433 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:01:01.359452 | orchestrator | 2026-02-13 05:01:01.359471 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-13 05:01:01.359517 | orchestrator | Friday 13 February 2026 05:00:33 +0000 (0:00:02.670) 0:00:14.310 ******* 2026-02-13 05:01:01.359529 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:01:01.359540 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:01:01.359551 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:01:01.359562 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:01:01.359598 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:01:01.359609 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:01:01.359620 | orchestrator | 2026-02-13 05:01:01.359632 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-13 05:01:01.359643 | orchestrator | Friday 13 February 2026 05:00:36 +0000 (0:00:02.436) 0:00:16.746 ******* 2026-02-13 05:01:01.359653 | orchestrator | ok: [testbed-node-3] 2026-02-13 
05:01:01.359664 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:01:01.359675 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:01:01.359686 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:01:01.359696 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:01:01.359708 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:01:01.359718 | orchestrator | 2026-02-13 05:01:01.359729 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-13 05:01:01.359740 | orchestrator | Friday 13 February 2026 05:00:39 +0000 (0:00:02.954) 0:00:19.701 ******* 2026-02-13 05:01:01.359751 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.359762 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:01:01.359773 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:01:01.359783 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:01:01.359794 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:01:01.359805 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:01:01.359816 | orchestrator | 2026-02-13 05:01:01.359827 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-13 05:01:01.359838 | orchestrator | Friday 13 February 2026 05:00:41 +0000 (0:00:02.331) 0:00:22.033 ******* 2026-02-13 05:01:01.359849 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.359860 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:01:01.359871 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:01:01.359881 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:01:01.359903 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:01:01.359914 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:01:01.359925 | orchestrator | 2026-02-13 05:01:01.359936 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-13 05:01:01.359947 | orchestrator | Friday 13 February 2026 05:00:43 +0000 
(0:00:01.923) 0:00:23.956 ******* 2026-02-13 05:01:01.359958 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 05:01:01.359969 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 05:01:01.359980 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.359990 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 05:01:01.360001 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 05:01:01.360012 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:01:01.360023 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 05:01:01.360033 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 05:01:01.360044 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:01:01.360055 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 05:01:01.360066 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 05:01:01.360077 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:01:01.360110 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 05:01:01.360121 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 05:01:01.360132 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:01:01.360143 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-13 05:01:01.360154 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-13 05:01:01.360165 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:01:01.360175 | orchestrator | 2026-02-13 05:01:01.360194 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to 
sudo secure_path] ********************* 2026-02-13 05:01:01.360205 | orchestrator | Friday 13 February 2026 05:00:45 +0000 (0:00:02.494) 0:00:26.450 ******* 2026-02-13 05:01:01.360216 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.360226 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:01:01.360237 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:01:01.360248 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:01:01.360258 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:01:01.360269 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:01:01.360280 | orchestrator | 2026-02-13 05:01:01.360291 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-13 05:01:01.360303 | orchestrator | Friday 13 February 2026 05:00:47 +0000 (0:00:02.192) 0:00:28.643 ******* 2026-02-13 05:01:01.360314 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:01:01.360325 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:01:01.360335 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:01:01.360346 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:01:01.360358 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:01:01.360377 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:01:01.360402 | orchestrator | 2026-02-13 05:01:01.360428 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-13 05:01:01.360445 | orchestrator | Friday 13 February 2026 05:00:50 +0000 (0:00:02.106) 0:00:30.750 ******* 2026-02-13 05:01:01.360462 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:01:01.360479 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:01:01.360619 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:01:01.360638 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:01:01.360656 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:01:01.360675 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:01:01.360694 | 
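In the download tasks that follow, "Download k3s binary x64" runs while the arm64 and armhf variants are skipped, because the role branches on node architecture. A hedged sketch of that selection — the function name and mapping are ours, though k3s release assets do use the `-arm64`/`-armhf` suffixes with a plain `k3s` binary for x86_64:

```shell
# Map a machine architecture (as from `uname -m`) to the k3s binary suffix.
k3s_suffix() {
  case "$1" in
    x86_64)  echo ""       ;;   # plain "k3s" binary
    aarch64) echo "-arm64" ;;
    armv7l)  echo "-armhf" ;;
    *)       echo "unsupported" ;;
  esac
}

echo "binary for $(uname -m): k3s$(k3s_suffix "$(uname -m)")"
```

On these amd64 testbed nodes the suffix is empty, which is why only the x64 task reported `ok`.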
orchestrator | 2026-02-13 05:01:01.360713 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-13 05:01:01.360731 | orchestrator | Friday 13 February 2026 05:00:52 +0000 (0:00:02.758) 0:00:33.508 ******* 2026-02-13 05:01:01.360749 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.360768 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:01:01.360787 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:01:01.360806 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:01:01.360824 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:01:01.360843 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:01:01.360862 | orchestrator | 2026-02-13 05:01:01.360880 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-13 05:01:01.360899 | orchestrator | Friday 13 February 2026 05:00:54 +0000 (0:00:02.030) 0:00:35.538 ******* 2026-02-13 05:01:01.360919 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.360939 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:01:01.360958 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:01:01.360976 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:01:01.360995 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:01:01.361014 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:01:01.361031 | orchestrator | 2026-02-13 05:01:01.361048 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-13 05:01:01.361068 | orchestrator | Friday 13 February 2026 05:00:57 +0000 (0:00:02.177) 0:00:37.716 ******* 2026-02-13 05:01:01.361085 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.361107 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:01:01.361125 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:01:01.361142 | orchestrator | skipping: 
[testbed-node-0] 2026-02-13 05:01:01.361158 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:01:01.361174 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:01:01.361190 | orchestrator | 2026-02-13 05:01:01.361205 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-13 05:01:01.361219 | orchestrator | Friday 13 February 2026 05:00:58 +0000 (0:00:01.778) 0:00:39.494 ******* 2026-02-13 05:01:01.361251 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-13 05:01:01.361270 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-13 05:01:01.361286 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.361303 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-13 05:01:01.361320 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-13 05:01:01.361336 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:01:01.361353 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-13 05:01:01.361371 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-13 05:01:01.361381 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:01:01.361391 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-13 05:01:01.361400 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-13 05:01:01.361410 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:01:01.361420 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-13 05:01:01.361429 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-13 05:01:01.361439 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:01:01.361449 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-13 05:01:01.361458 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-13 05:01:01.361468 | orchestrator | skipping: [testbed-node-2] 2026-02-13 
05:01:01.361477 | orchestrator | 2026-02-13 05:01:01.361508 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-13 05:01:01.361518 | orchestrator | Friday 13 February 2026 05:01:00 +0000 (0:00:02.060) 0:00:41.555 ******* 2026-02-13 05:01:01.361528 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:01:01.361538 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:01:01.361561 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:03:02.897715 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:03:02.897826 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:03:02.897837 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:02.897845 | orchestrator | 2026-02-13 05:03:02.897854 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-13 05:03:02.897862 | orchestrator | Friday 13 February 2026 05:01:02 +0000 (0:00:01.757) 0:00:43.312 ******* 2026-02-13 05:03:02.897869 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:03:02.897876 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:03:02.897883 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:03:02.897890 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:03:02.897897 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:03:02.897904 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:02.897911 | orchestrator | 2026-02-13 05:03:02.897918 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-13 05:03:02.897924 | orchestrator | 2026-02-13 05:03:02.897932 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-13 05:03:02.897940 | orchestrator | Friday 13 February 2026 05:01:05 +0000 (0:00:02.721) 0:00:46.034 ******* 2026-02-13 05:03:02.897947 | orchestrator | ok: [testbed-node-0] 2026-02-13 
05:03:02.897955 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.897977 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.897984 | orchestrator | 2026-02-13 05:03:02.897994 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-02-13 05:03:02.898001 | orchestrator | Friday 13 February 2026 05:01:07 +0000 (0:00:01.791) 0:00:47.825 ******* 2026-02-13 05:03:02.898008 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.898063 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.898071 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.898077 | orchestrator | 2026-02-13 05:03:02.898084 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-02-13 05:03:02.898091 | orchestrator | Friday 13 February 2026 05:01:09 +0000 (0:00:02.156) 0:00:49.982 ******* 2026-02-13 05:03:02.898116 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:03:02.898123 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:03:02.898130 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:03:02.898137 | orchestrator | 2026-02-13 05:03:02.898143 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-02-13 05:03:02.898151 | orchestrator | Friday 13 February 2026 05:01:11 +0000 (0:00:02.245) 0:00:52.228 ******* 2026-02-13 05:03:02.898159 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.898167 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.898175 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.898183 | orchestrator | 2026-02-13 05:03:02.898191 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-02-13 05:03:02.898199 | orchestrator | Friday 13 February 2026 05:01:13 +0000 (0:00:01.931) 0:00:54.159 ******* 2026-02-13 05:03:02.898207 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:03:02.898215 | orchestrator | skipping: 
[testbed-node-1] 2026-02-13 05:03:02.898223 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:02.898231 | orchestrator | 2026-02-13 05:03:02.898239 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-02-13 05:03:02.898247 | orchestrator | Friday 13 February 2026 05:01:14 +0000 (0:00:01.375) 0:00:55.535 ******* 2026-02-13 05:03:02.898255 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.898263 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.898271 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.898279 | orchestrator | 2026-02-13 05:03:02.898287 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-02-13 05:03:02.898295 | orchestrator | Friday 13 February 2026 05:01:16 +0000 (0:00:01.721) 0:00:57.257 ******* 2026-02-13 05:03:02.898303 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.898312 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.898319 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.898327 | orchestrator | 2026-02-13 05:03:02.898335 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-02-13 05:03:02.898343 | orchestrator | Friday 13 February 2026 05:01:18 +0000 (0:00:02.300) 0:00:59.557 ******* 2026-02-13 05:03:02.898351 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:03:02.898359 | orchestrator | 2026-02-13 05:03:02.898367 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2026-02-13 05:03:02.898376 | orchestrator | Friday 13 February 2026 05:01:20 +0000 (0:00:01.942) 0:01:01.500 ******* 2026-02-13 05:03:02.898384 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.898392 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.898400 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.898408 | 
orchestrator | 2026-02-13 05:03:02.898416 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-13 05:03:02.898424 | orchestrator | Friday 13 February 2026 05:01:23 +0000 (0:00:02.357) 0:01:03.858 ******* 2026-02-13 05:03:02.898431 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:03:02.898440 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.898447 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:02.898456 | orchestrator | 2026-02-13 05:03:02.898464 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-13 05:03:02.898472 | orchestrator | Friday 13 February 2026 05:01:24 +0000 (0:00:01.794) 0:01:05.652 ******* 2026-02-13 05:03:02.898480 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:03:02.898488 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:02.898562 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:03:02.898571 | orchestrator | 2026-02-13 05:03:02.898577 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-13 05:03:02.898584 | orchestrator | Friday 13 February 2026 05:01:26 +0000 (0:00:01.820) 0:01:07.473 ******* 2026-02-13 05:03:02.898591 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:03:02.898597 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:02.898604 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:03:02.898617 | orchestrator | 2026-02-13 05:03:02.898624 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-13 05:03:02.898631 | orchestrator | Friday 13 February 2026 05:01:29 +0000 (0:00:02.413) 0:01:09.886 ******* 2026-02-13 05:03:02.898638 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:03:02.898644 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:03:02.898666 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:02.898674 | 
orchestrator | 2026-02-13 05:03:02.898680 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-13 05:03:02.898687 | orchestrator | Friday 13 February 2026 05:01:30 +0000 (0:00:01.484) 0:01:11.371 ******* 2026-02-13 05:03:02.898694 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:03:02.898700 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:03:02.898707 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:02.898714 | orchestrator | 2026-02-13 05:03:02.898721 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-13 05:03:02.898727 | orchestrator | Friday 13 February 2026 05:01:32 +0000 (0:00:01.648) 0:01:13.020 ******* 2026-02-13 05:03:02.898734 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:03:02.898741 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:03:02.898747 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:03:02.898754 | orchestrator | 2026-02-13 05:03:02.898760 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-13 05:03:02.898767 | orchestrator | Friday 13 February 2026 05:01:34 +0000 (0:00:02.157) 0:01:15.178 ******* 2026-02-13 05:03:02.898774 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.898780 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.898787 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.898794 | orchestrator | 2026-02-13 05:03:02.898800 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-13 05:03:02.898807 | orchestrator | Friday 13 February 2026 05:01:36 +0000 (0:00:01.981) 0:01:17.160 ******* 2026-02-13 05:03:02.898814 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.898820 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.898827 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.898833 | orchestrator | 2026-02-13 05:03:02.898840 
| orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-13 05:03:02.898847 | orchestrator | Friday 13 February 2026 05:01:37 +0000 (0:00:01.370) 0:01:18.530 ******* 2026-02-13 05:03:02.898854 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-13 05:03:02.898875 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-13 05:03:02.898882 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-13 05:03:02.898888 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-13 05:03:02.898895 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-13 05:03:02.898902 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-13 05:03:02.898909 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-13 05:03:02.898915 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-13 05:03:02.898922 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
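The "Verify that all nodes actually joined" task above retries up to 20 times before the three masters report in. A common way to express this in Ansible is an `until`/`retries` loop around a `kubectl get nodes` call; the sketch below is a hypothetical reconstruction under that assumption (only the task name is taken from the log, the module arguments and variable names are illustrative, not the actual `k3s_server` role code):

```yaml
# Illustrative sketch, not the real role task: poll until every play host
# shows up in the node list, failing after 20 attempts.
- name: Verify that all nodes actually joined (check k3s-init.service if this fails)
  ansible.builtin.command:
    cmd: k3s kubectl get nodes -o jsonpath={.items[*].metadata.name}
  register: nodes_joined
  until: >-
    nodes_joined.rc == 0 and
    (nodes_joined.stdout.split() | length) == (ansible_play_hosts | length)
  retries: 20
  delay: 10
  changed_when: false
```

With a 10-second delay this matches the observed behaviour: three failed attempts per node, then success roughly 34 seconds into the task.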
2026-02-13 05:03:02.898929 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.898941 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.898947 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.898954 | orchestrator | 2026-02-13 05:03:02.898961 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-13 05:03:02.898968 | orchestrator | Friday 13 February 2026 05:02:11 +0000 (0:00:34.088) 0:01:52.619 ******* 2026-02-13 05:03:02.898975 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:03:02.898981 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:03:02.898988 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:02.898995 | orchestrator | 2026-02-13 05:03:02.899001 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-13 05:03:02.899008 | orchestrator | Friday 13 February 2026 05:02:13 +0000 (0:00:01.344) 0:01:53.963 ******* 2026-02-13 05:03:02.899015 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:03:02.899021 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:03:02.899028 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:03:02.899035 | orchestrator | 2026-02-13 05:03:02.899041 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-13 05:03:02.899048 | orchestrator | Friday 13 February 2026 05:02:15 +0000 (0:00:02.304) 0:01:56.268 ******* 2026-02-13 05:03:02.899054 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.899061 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.899068 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.899074 | orchestrator | 2026-02-13 05:03:02.899081 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-13 05:03:02.899093 | orchestrator | Friday 13 February 2026 05:02:17 +0000 (0:00:02.362) 0:01:58.630 ******* 2026-02-13 05:03:02.899100 | orchestrator 
| changed: [testbed-node-0] 2026-02-13 05:03:02.899107 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:03:02.899114 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:03:02.899120 | orchestrator | 2026-02-13 05:03:02.899127 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-13 05:03:02.899134 | orchestrator | Friday 13 February 2026 05:03:01 +0000 (0:00:43.174) 0:02:41.805 ******* 2026-02-13 05:03:02.899140 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:02.899147 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:02.899154 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:02.899160 | orchestrator | 2026-02-13 05:03:02.899167 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-13 05:03:02.899178 | orchestrator | Friday 13 February 2026 05:03:02 +0000 (0:00:01.732) 0:02:43.538 ******* 2026-02-13 05:03:55.349717 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:55.349859 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:55.349886 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:55.349905 | orchestrator | 2026-02-13 05:03:55.349926 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-13 05:03:55.349947 | orchestrator | Friday 13 February 2026 05:03:04 +0000 (0:00:01.723) 0:02:45.261 ******* 2026-02-13 05:03:55.349966 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:03:55.349986 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:03:55.350004 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:03:55.350082 | orchestrator | 2026-02-13 05:03:55.350095 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-13 05:03:55.350106 | orchestrator | Friday 13 February 2026 05:03:06 +0000 (0:00:01.926) 0:02:47.188 ******* 2026-02-13 05:03:55.350117 | orchestrator | ok: [testbed-node-0] 2026-02-13 
05:03:55.350128 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:55.350139 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:55.350150 | orchestrator | 2026-02-13 05:03:55.350161 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-13 05:03:55.350172 | orchestrator | Friday 13 February 2026 05:03:08 +0000 (0:00:01.702) 0:02:48.891 ******* 2026-02-13 05:03:55.350183 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:55.350194 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:55.350205 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:55.350243 | orchestrator | 2026-02-13 05:03:55.350273 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-13 05:03:55.350287 | orchestrator | Friday 13 February 2026 05:03:09 +0000 (0:00:01.420) 0:02:50.311 ******* 2026-02-13 05:03:55.350299 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:03:55.350312 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:03:55.350325 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:03:55.350337 | orchestrator | 2026-02-13 05:03:55.350351 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-13 05:03:55.350365 | orchestrator | Friday 13 February 2026 05:03:11 +0000 (0:00:01.890) 0:02:52.202 ******* 2026-02-13 05:03:55.350377 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:55.350389 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:55.350402 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:55.350414 | orchestrator | 2026-02-13 05:03:55.350427 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-13 05:03:55.350439 | orchestrator | Friday 13 February 2026 05:03:13 +0000 (0:00:01.935) 0:02:54.138 ******* 2026-02-13 05:03:55.350452 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:03:55.350465 | orchestrator | changed: 
[testbed-node-1] 2026-02-13 05:03:55.350478 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:03:55.350489 | orchestrator | 2026-02-13 05:03:55.350554 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-13 05:03:55.350567 | orchestrator | Friday 13 February 2026 05:03:15 +0000 (0:00:01.965) 0:02:56.104 ******* 2026-02-13 05:03:55.350580 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:03:55.350593 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:03:55.350604 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:03:55.350615 | orchestrator | 2026-02-13 05:03:55.350626 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-13 05:03:55.350637 | orchestrator | Friday 13 February 2026 05:03:17 +0000 (0:00:01.977) 0:02:58.082 ******* 2026-02-13 05:03:55.350648 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:03:55.350659 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:03:55.350670 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:55.350681 | orchestrator | 2026-02-13 05:03:55.350691 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-13 05:03:55.350703 | orchestrator | Friday 13 February 2026 05:03:18 +0000 (0:00:01.410) 0:02:59.493 ******* 2026-02-13 05:03:55.350714 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:03:55.350724 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:03:55.350735 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:03:55.350746 | orchestrator | 2026-02-13 05:03:55.350757 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-13 05:03:55.350768 | orchestrator | Friday 13 February 2026 05:03:20 +0000 (0:00:01.435) 0:03:00.928 ******* 2026-02-13 05:03:55.350779 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:55.350790 | orchestrator | ok: [testbed-node-1] 
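The node-token sequence above (register mode, relax it, read, store, restore) is a classic pattern for reading a root-only secret and putting permissions back afterwards. A sketch of that shape, assuming names that mirror the task banners (the module choices and variable names are assumptions, not the role's actual code):

```yaml
# Sketch of the node-token read pattern seen in the log; all variable
# names here are illustrative.
- name: Register node-token file access mode
  ansible.builtin.stat:
    path: /var/lib/rancher/k3s/server/node-token
  register: node_token_stat

- name: Change file access node-token
  ansible.builtin.file:
    path: /var/lib/rancher/k3s/server/node-token
    mode: "g+rx,o+rx"

- name: Read node-token from master
  ansible.builtin.slurp:
    src: /var/lib/rancher/k3s/server/node-token
  register: node_token_raw

- name: Store Master node-token
  ansible.builtin.set_fact:
    token: "{{ node_token_raw.content | b64decode | trim }}"

- name: Restore node-token file access
  ansible.builtin.file:
    path: /var/lib/rancher/k3s/server/node-token
    mode: "{{ node_token_stat.stat.mode }}"
```

Restoring the recorded mode (rather than hard-coding one) is what makes the "Restore node-token file access" step report `changed` symmetrically with the earlier "Change file access" step.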
2026-02-13 05:03:55.350800 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:55.350811 | orchestrator | 2026-02-13 05:03:55.350822 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-13 05:03:55.350833 | orchestrator | Friday 13 February 2026 05:03:21 +0000 (0:00:01.724) 0:03:02.652 ******* 2026-02-13 05:03:55.350843 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:03:55.350854 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:03:55.350865 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:03:55.350876 | orchestrator | 2026-02-13 05:03:55.350888 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-13 05:03:55.350900 | orchestrator | Friday 13 February 2026 05:03:23 +0000 (0:00:01.733) 0:03:04.385 ******* 2026-02-13 05:03:55.350911 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-13 05:03:55.350923 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-13 05:03:55.350942 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-13 05:03:55.350953 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-13 05:03:55.350964 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-13 05:03:55.350975 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-13 05:03:55.350986 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-13 05:03:55.350997 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-13 05:03:55.351030 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-13 05:03:55.351042 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-13 05:03:55.351052 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-13 05:03:55.351063 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-13 05:03:55.351074 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-13 05:03:55.351085 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-13 05:03:55.351095 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-13 05:03:55.351106 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-13 05:03:55.351117 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-13 05:03:55.351128 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-13 05:03:55.351139 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-13 05:03:55.351150 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-13 05:03:55.351161 | orchestrator | 2026-02-13 05:03:55.351171 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-13 05:03:55.351182 | orchestrator | 2026-02-13 05:03:55.351193 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-13 05:03:55.351204 | orchestrator | Friday 13 February 2026 05:03:28 +0000 (0:00:04.386) 0:03:08.771 ******* 
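The cleanup task above removes the manifests that were only needed to bootstrap the cluster, so k3s does not re-apply them on every service start. The item list below is taken directly from the loop output in the log (node-0 additionally removed `vip.yaml` and `vip-rbac.yaml`); the loop construction itself is an assumption:

```yaml
# Sketch of the bootstrap-manifest cleanup; paths mirror the loop items
# printed above, the task body is illustrative.
- name: Remove bootstrap-only manifests so k3s doesn't auto apply on start
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  loop:
    - /var/lib/rancher/k3s/server/manifests/rolebindings.yaml
    - /var/lib/rancher/k3s/server/manifests/local-storage.yaml
    - /var/lib/rancher/k3s/server/manifests/coredns.yaml
    - /var/lib/rancher/k3s/server/manifests/runtimes.yaml
    - /var/lib/rancher/k3s/server/manifests/ccm.yaml
    - /var/lib/rancher/k3s/server/manifests/metrics-server
```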
2026-02-13 05:03:55.351215 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:03:55.351226 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:03:55.351237 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:03:55.351247 | orchestrator | 2026-02-13 05:03:55.351258 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-13 05:03:55.351269 | orchestrator | Friday 13 February 2026 05:03:29 +0000 (0:00:01.381) 0:03:10.153 ******* 2026-02-13 05:03:55.351280 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:03:55.351291 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:03:55.351301 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:03:55.351312 | orchestrator | 2026-02-13 05:03:55.351323 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-13 05:03:55.351334 | orchestrator | Friday 13 February 2026 05:03:31 +0000 (0:00:02.458) 0:03:12.611 ******* 2026-02-13 05:03:55.351345 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:03:55.351355 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:03:55.351366 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:03:55.351376 | orchestrator | 2026-02-13 05:03:55.351387 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-13 05:03:55.351398 | orchestrator | Friday 13 February 2026 05:03:33 +0000 (0:00:01.601) 0:03:14.213 ******* 2026-02-13 05:03:55.351409 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 05:03:55.351427 | orchestrator | 2026-02-13 05:03:55.351438 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-13 05:03:55.351449 | orchestrator | Friday 13 February 2026 05:03:35 +0000 (0:00:01.732) 0:03:15.946 ******* 2026-02-13 05:03:55.351460 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:03:55.351470 | orchestrator | 
skipping: [testbed-node-4] 2026-02-13 05:03:55.351481 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:03:55.351492 | orchestrator | 2026-02-13 05:03:55.351521 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-13 05:03:55.351532 | orchestrator | Friday 13 February 2026 05:03:36 +0000 (0:00:01.653) 0:03:17.599 ******* 2026-02-13 05:03:55.351543 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:03:55.351554 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:03:55.351566 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:03:55.351577 | orchestrator | 2026-02-13 05:03:55.351588 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-13 05:03:55.351599 | orchestrator | Friday 13 February 2026 05:03:38 +0000 (0:00:01.425) 0:03:19.024 ******* 2026-02-13 05:03:55.351618 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:03:55.351630 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:03:55.351640 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:03:55.351652 | orchestrator | 2026-02-13 05:03:55.351663 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-13 05:03:55.351674 | orchestrator | Friday 13 February 2026 05:03:39 +0000 (0:00:01.436) 0:03:20.461 ******* 2026-02-13 05:03:55.351685 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:03:55.351696 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:03:55.351707 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:03:55.351718 | orchestrator | 2026-02-13 05:03:55.351729 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-13 05:03:55.351740 | orchestrator | Friday 13 February 2026 05:03:41 +0000 (0:00:01.798) 0:03:22.259 ******* 2026-02-13 05:03:55.351751 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:03:55.351762 | orchestrator | ok: [testbed-node-4] 
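The three `http_proxy` tasks above were skipped on all agents because no proxy is configured in this run. A plausible shape for such tasks is a systemd drop-in guarded by a `when:` condition; this is a hedged sketch (directory, template name, and the `proxy_env` variable are all assumptions, not the `k3s_agent` role's actual contents):

```yaml
# Hypothetical proxy drop-in tasks; skipped when no proxy is defined,
# matching the "skipping" results seen in the log.
- name: Create k3s-node.service.d directory
  ansible.builtin.file:
    path: /etc/systemd/system/k3s-node.service.d
    state: directory
    mode: "0755"
  when: proxy_env is defined

- name: Copy K3s http_proxy conf file
  ansible.builtin.template:
    src: http_proxy.conf.j2
    dest: /etc/systemd/system/k3s-node.service.d/http_proxy.conf
    mode: "0644"
  when: proxy_env is defined
```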
2026-02-13 05:03:55.351772 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:03:55.351783 | orchestrator | 2026-02-13 05:03:55.351794 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-13 05:03:55.351805 | orchestrator | Friday 13 February 2026 05:03:43 +0000 (0:00:02.397) 0:03:24.656 ******* 2026-02-13 05:03:55.351816 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:03:55.351827 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:03:55.351838 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:03:55.351848 | orchestrator | 2026-02-13 05:03:55.351859 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-13 05:03:55.351870 | orchestrator | Friday 13 February 2026 05:03:46 +0000 (0:00:02.383) 0:03:27.039 ******* 2026-02-13 05:03:55.351888 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:05:02.484622 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:05:02.484740 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:05:02.484756 | orchestrator | 2026-02-13 05:05:02.484770 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-13 05:05:02.484782 | orchestrator | 2026-02-13 05:05:02.484793 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-13 05:05:02.484805 | orchestrator | Friday 13 February 2026 05:03:55 +0000 (0:00:08.956) 0:03:35.996 ******* 2026-02-13 05:05:02.484816 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.484828 | orchestrator | 2026-02-13 05:05:02.484839 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-13 05:05:02.484850 | orchestrator | Friday 13 February 2026 05:03:57 +0000 (0:00:02.164) 0:03:38.161 ******* 2026-02-13 05:05:02.484861 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.484872 | orchestrator | 2026-02-13 05:05:02.484884 | 
orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-13 05:05:02.484920 | orchestrator | Friday 13 February 2026 05:03:59 +0000 (0:00:01.503) 0:03:39.664 ******* 2026-02-13 05:05:02.484932 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-13 05:05:02.484943 | orchestrator | 2026-02-13 05:05:02.484954 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-13 05:05:02.484979 | orchestrator | Friday 13 February 2026 05:04:00 +0000 (0:00:01.575) 0:03:41.239 ******* 2026-02-13 05:05:02.484990 | orchestrator | changed: [testbed-manager] 2026-02-13 05:05:02.485001 | orchestrator | 2026-02-13 05:05:02.485012 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-13 05:05:02.485023 | orchestrator | Friday 13 February 2026 05:04:02 +0000 (0:00:01.935) 0:03:43.175 ******* 2026-02-13 05:05:02.485034 | orchestrator | changed: [testbed-manager] 2026-02-13 05:05:02.485045 | orchestrator | 2026-02-13 05:05:02.485056 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-13 05:05:02.485067 | orchestrator | Friday 13 February 2026 05:04:04 +0000 (0:00:01.530) 0:03:44.705 ******* 2026-02-13 05:05:02.485078 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-13 05:05:02.485089 | orchestrator | 2026-02-13 05:05:02.485100 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-13 05:05:02.485111 | orchestrator | Friday 13 February 2026 05:04:06 +0000 (0:00:02.880) 0:03:47.586 ******* 2026-02-13 05:05:02.485125 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-13 05:05:02.485138 | orchestrator | 2026-02-13 05:05:02.485152 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-02-13 05:05:02.485165 | orchestrator | Friday 13 February 
2026 05:04:08 +0000 (0:00:01.824) 0:03:49.411 ******* 2026-02-13 05:05:02.485178 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.485191 | orchestrator | 2026-02-13 05:05:02.485203 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-13 05:05:02.485216 | orchestrator | Friday 13 February 2026 05:04:10 +0000 (0:00:01.481) 0:03:50.892 ******* 2026-02-13 05:05:02.485228 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.485242 | orchestrator | 2026-02-13 05:05:02.485255 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-13 05:05:02.485268 | orchestrator | 2026-02-13 05:05:02.485281 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-13 05:05:02.485295 | orchestrator | Friday 13 February 2026 05:04:11 +0000 (0:00:01.681) 0:03:52.574 ******* 2026-02-13 05:05:02.485308 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.485322 | orchestrator | 2026-02-13 05:05:02.485335 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-13 05:05:02.485348 | orchestrator | Friday 13 February 2026 05:04:13 +0000 (0:00:01.162) 0:03:53.736 ******* 2026-02-13 05:05:02.485360 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-13 05:05:02.485374 | orchestrator | 2026-02-13 05:05:02.485388 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-13 05:05:02.485401 | orchestrator | Friday 13 February 2026 05:04:14 +0000 (0:00:01.470) 0:03:55.206 ******* 2026-02-13 05:05:02.485415 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.485428 | orchestrator | 2026-02-13 05:05:02.485441 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-02-13 05:05:02.485454 | orchestrator | Friday 13 February 2026 
05:04:16 +0000 (0:00:01.813) 0:03:57.020 ******* 2026-02-13 05:05:02.485467 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.485481 | orchestrator | 2026-02-13 05:05:02.485492 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-13 05:05:02.485503 | orchestrator | Friday 13 February 2026 05:04:18 +0000 (0:00:02.616) 0:03:59.637 ******* 2026-02-13 05:05:02.485607 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.485619 | orchestrator | 2026-02-13 05:05:02.485630 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-13 05:05:02.485641 | orchestrator | Friday 13 February 2026 05:04:20 +0000 (0:00:01.479) 0:04:01.116 ******* 2026-02-13 05:05:02.485663 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.485674 | orchestrator | 2026-02-13 05:05:02.485685 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-13 05:05:02.485696 | orchestrator | Friday 13 February 2026 05:04:22 +0000 (0:00:01.554) 0:04:02.671 ******* 2026-02-13 05:05:02.485706 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.485717 | orchestrator | 2026-02-13 05:05:02.485728 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-13 05:05:02.485739 | orchestrator | Friday 13 February 2026 05:04:23 +0000 (0:00:01.610) 0:04:04.281 ******* 2026-02-13 05:05:02.485750 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.485761 | orchestrator | 2026-02-13 05:05:02.485772 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-13 05:05:02.485783 | orchestrator | Friday 13 February 2026 05:04:26 +0000 (0:00:02.511) 0:04:06.793 ******* 2026-02-13 05:05:02.485793 | orchestrator | ok: [testbed-manager] 2026-02-13 05:05:02.485804 | orchestrator | 2026-02-13 05:05:02.485815 | orchestrator | PLAY [Run post actions on master 
nodes] **************************************** 2026-02-13 05:05:02.485826 | orchestrator | 2026-02-13 05:05:02.485837 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-13 05:05:02.485864 | orchestrator | Friday 13 February 2026 05:04:27 +0000 (0:00:01.724) 0:04:08.518 ******* 2026-02-13 05:05:02.485876 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:05:02.485887 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:05:02.485898 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:05:02.485909 | orchestrator | 2026-02-13 05:05:02.485920 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-13 05:05:02.485930 | orchestrator | Friday 13 February 2026 05:04:29 +0000 (0:00:01.391) 0:04:09.909 ******* 2026-02-13 05:05:02.485941 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:05:02.485952 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:05:02.485963 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:05:02.485974 | orchestrator | 2026-02-13 05:05:02.485984 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-13 05:05:02.485995 | orchestrator | Friday 13 February 2026 05:04:30 +0000 (0:00:01.681) 0:04:11.591 ******* 2026-02-13 05:05:02.486006 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:05:02.486079 | orchestrator | 2026-02-13 05:05:02.486091 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-13 05:05:02.486102 | orchestrator | Friday 13 February 2026 05:04:32 +0000 (0:00:01.753) 0:04:13.344 ******* 2026-02-13 05:05:02.486114 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-13 05:05:02.486125 | orchestrator | 2026-02-13 05:05:02.486135 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] 
********************* 2026-02-13 05:05:02.486146 | orchestrator | Friday 13 February 2026 05:04:34 +0000 (0:00:01.894) 0:04:15.240 ******* 2026-02-13 05:05:02.486157 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 05:05:02.486169 | orchestrator | 2026-02-13 05:05:02.486179 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-13 05:05:02.486190 | orchestrator | Friday 13 February 2026 05:04:36 +0000 (0:00:01.836) 0:04:17.076 ******* 2026-02-13 05:05:02.486201 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:05:02.486212 | orchestrator | 2026-02-13 05:05:02.486223 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-13 05:05:02.486234 | orchestrator | Friday 13 February 2026 05:04:37 +0000 (0:00:01.160) 0:04:18.237 ******* 2026-02-13 05:05:02.486245 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 05:05:02.486256 | orchestrator | 2026-02-13 05:05:02.486266 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-13 05:05:02.486277 | orchestrator | Friday 13 February 2026 05:04:39 +0000 (0:00:02.069) 0:04:20.307 ******* 2026-02-13 05:05:02.486288 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 05:05:02.486307 | orchestrator | 2026-02-13 05:05:02.486318 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-13 05:05:02.486329 | orchestrator | Friday 13 February 2026 05:04:41 +0000 (0:00:02.349) 0:04:22.656 ******* 2026-02-13 05:05:02.486340 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-13 05:05:02.486351 | orchestrator | 2026-02-13 05:05:02.486361 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-13 05:05:02.486372 | orchestrator | Friday 13 February 2026 05:04:43 +0000 (0:00:01.172) 0:04:23.828 ******* 2026-02-13 05:05:02.486383 | orchestrator | ok: 
[testbed-node-0 -> localhost]
2026-02-13 05:05:02.486394 | orchestrator |
2026-02-13 05:05:02.486405 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-13 05:05:02.486416 | orchestrator | Friday 13 February 2026 05:04:44 +0000 (0:00:01.124) 0:04:24.953 *******
2026-02-13 05:05:02.486426 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-02-13 05:05:02.486437 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-02-13 05:05:02.486450 | orchestrator | }
2026-02-13 05:05:02.486462 | orchestrator |
2026-02-13 05:05:02.486473 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-13 05:05:02.486483 | orchestrator | Friday 13 February 2026 05:04:45 +0000 (0:00:01.146) 0:04:26.100 *******
2026-02-13 05:05:02.486494 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:05:02.486537 | orchestrator |
2026-02-13 05:05:02.486551 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-13 05:05:02.486562 | orchestrator | Friday 13 February 2026 05:04:46 +0000 (0:00:01.156) 0:04:27.256 *******
2026-02-13 05:05:02.486572 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-13 05:05:02.486584 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-13 05:05:02.486595 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-13 05:05:02.486606 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-13 05:05:02.486617 | orchestrator |
2026-02-13 05:05:02.486628 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-13 05:05:02.486639 | orchestrator | Friday 13 February 2026 05:04:52 +0000 (0:00:05.581) 0:04:32.838 *******
2026-02-13 05:05:02.486650 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-13 05:05:02.486661 | orchestrator |
2026-02-13 05:05:02.486694 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2026-02-13 05:05:02.486705 | orchestrator | Friday 13 February 2026 05:04:54 +0000 (0:00:02.369) 0:04:35.207 *******
2026-02-13 05:05:02.486716 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-13 05:05:02.486727 | orchestrator |
2026-02-13 05:05:02.486738 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2026-02-13 05:05:02.486749 | orchestrator | Friday 13 February 2026 05:04:57 +0000 (0:00:02.620) 0:04:37.828 *******
2026-02-13 05:05:02.486760 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-13 05:05:02.486771 | orchestrator |
2026-02-13 05:05:02.486782 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2026-02-13 05:05:02.486793 | orchestrator | Friday 13 February 2026 05:05:01 +0000 (0:00:04.171) 0:04:42.000 *******
2026-02-13 05:05:02.486804 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:05:02.486815 | orchestrator |
2026-02-13 05:05:02.486834 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2026-02-13 05:05:33.444656 | orchestrator | Friday 13 February 2026 05:05:02 +0000 (0:00:01.125) 0:04:43.125 *******
2026-02-13 05:05:33.444764 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2026-02-13 05:05:33.444778 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2026-02-13 05:05:33.444786 | orchestrator |
2026-02-13 05:05:33.444793 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2026-02-13 05:05:33.444818 | orchestrator | Friday 13 February 2026 05:05:05 +0000 (0:00:02.962) 0:04:46.088 *******
2026-02-13 05:05:33.444825 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:05:33.444832 | orchestrator | skipping: [testbed-node-1]
2026-02-13 05:05:33.444839 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:05:33.444845 | orchestrator |
2026-02-13 05:05:33.444852 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2026-02-13 05:05:33.444859 | orchestrator | Friday 13 February 2026 05:05:06 +0000 (0:00:01.366) 0:04:47.455 *******
2026-02-13 05:05:33.444865 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:05:33.444872 | orchestrator | ok: [testbed-node-1]
2026-02-13 05:05:33.444879 | orchestrator | ok: [testbed-node-2]
2026-02-13 05:05:33.444885 | orchestrator |
2026-02-13 05:05:33.444903 | orchestrator | PLAY [Apply role k9s] **********************************************************
2026-02-13 05:05:33.444910 | orchestrator |
2026-02-13 05:05:33.444916 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2026-02-13 05:05:33.444923 | orchestrator | Friday 13 February 2026 05:05:08 +0000 (0:00:02.121) 0:04:49.576 *******
2026-02-13 05:05:33.444929 | orchestrator | ok: [testbed-manager]
2026-02-13 05:05:33.444935 | orchestrator |
2026-02-13 05:05:33.444942 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2026-02-13 05:05:33.444948 | orchestrator | Friday 13 February 2026 05:05:10 +0000 (0:00:01.114) 0:04:50.691 *******
2026-02-13 05:05:33.444954 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2026-02-13 05:05:33.444961 | orchestrator |
2026-02-13 05:05:33.444967 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2026-02-13 05:05:33.444974 | orchestrator | Friday 13 February 2026 05:05:11 +0000 (0:00:01.578) 0:04:52.269 *******
2026-02-13 05:05:33.444980 | orchestrator | ok: [testbed-manager]
2026-02-13 05:05:33.444986 | orchestrator |
2026-02-13 05:05:33.444992 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2026-02-13 05:05:33.444998 | orchestrator |
2026-02-13 05:05:33.445005 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2026-02-13 05:05:33.445011 | orchestrator | Friday 13 February 2026 05:05:17 +0000 (0:00:05.674) 0:04:57.944 *******
2026-02-13 05:05:33.445017 | orchestrator | ok: [testbed-node-3]
2026-02-13 05:05:33.445023 | orchestrator | ok: [testbed-node-4]
2026-02-13 05:05:33.445030 | orchestrator | ok: [testbed-node-5]
2026-02-13 05:05:33.445036 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:05:33.445042 | orchestrator | ok: [testbed-node-1]
2026-02-13 05:05:33.445048 | orchestrator | ok: [testbed-node-2]
2026-02-13 05:05:33.445054 | orchestrator |
2026-02-13 05:05:33.445060 | orchestrator | TASK [Manage labels] ***********************************************************
2026-02-13 05:05:33.445067 | orchestrator | Friday 13 February 2026 05:05:19 +0000 (0:00:02.093) 0:05:00.038 *******
2026-02-13 05:05:33.445074 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-13 05:05:33.445080 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-13 05:05:33.445086 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2026-02-13 05:05:33.445092 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-13 05:05:33.445099 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-13 05:05:33.445105 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2026-02-13 05:05:33.445111 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-13 05:05:33.445117 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-13 05:05:33.445123 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2026-02-13 05:05:33.445130 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-13 05:05:33.445143 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-13 05:05:33.445150 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2026-02-13 05:05:33.445158 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-13 05:05:33.445165 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-13 05:05:33.445172 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2026-02-13 05:05:33.445180 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-13 05:05:33.445187 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-13 05:05:33.445194 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2026-02-13 05:05:33.445201 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-13 05:05:33.445208 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-13 05:05:33.445215 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2026-02-13 05:05:33.445235 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-13 05:05:33.445243 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-13 05:05:33.445250 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2026-02-13 05:05:33.445257 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-13 05:05:33.445265 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-13 05:05:33.445272 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2026-02-13 05:05:33.445279 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-13 05:05:33.445286 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-13 05:05:33.445294 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2026-02-13 05:05:33.445301 | orchestrator |
2026-02-13 05:05:33.445309 | orchestrator | TASK [Manage annotations] ******************************************************
2026-02-13 05:05:33.445316 | orchestrator | Friday 13 February 2026 05:05:28 +0000 (0:00:09.605) 0:05:09.644 *******
2026-02-13 05:05:33.445324 | orchestrator | skipping: [testbed-node-3]
2026-02-13 05:05:33.445331 | orchestrator | skipping: [testbed-node-4]
2026-02-13 05:05:33.445338 | orchestrator | skipping: [testbed-node-5]
2026-02-13 05:05:33.445345 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:05:33.445353 | orchestrator | skipping: [testbed-node-1]
2026-02-13 05:05:33.445360 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:05:33.445366 | orchestrator |
2026-02-13 05:05:33.445374 | orchestrator | TASK [Manage taints] ***********************************************************
2026-02-13 05:05:33.445381 | orchestrator | Friday 13 February 2026 05:05:30 +0000 (0:00:01.908) 0:05:11.552 *******
2026-02-13 05:05:33.445388 | orchestrator | skipping: [testbed-node-3]
2026-02-13 05:05:33.445395 | orchestrator | skipping: [testbed-node-4]
2026-02-13 05:05:33.445403 | orchestrator | skipping: [testbed-node-5]
2026-02-13 05:05:33.445410 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:05:33.445417 | orchestrator | skipping: [testbed-node-1]
2026-02-13 05:05:33.445424 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:05:33.445432 | orchestrator |
2026-02-13 05:05:33.445439 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 05:05:33.445446 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 05:05:33.445456 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-13 05:05:33.445469 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-13 05:05:33.445477 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-13 05:05:33.445484 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-13 05:05:33.445491 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-13 05:05:33.445498 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-13 05:05:33.445504 | orchestrator |
2026-02-13 05:05:33.445527 | orchestrator |
2026-02-13 05:05:33.445533 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 05:05:33.445540 | orchestrator | Friday 13 February 2026 05:05:33 +0000 (0:00:02.518) 0:05:14.071 *******
2026-02-13 05:05:33.445546 | orchestrator | ===============================================================================
2026-02-13 05:05:33.445552 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 43.17s
2026-02-13 05:05:33.445558 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 34.09s
2026-02-13 05:05:33.445565 | orchestrator | Manage labels ----------------------------------------------------------- 9.61s
2026-02-13 05:05:33.445571 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.96s
2026-02-13 05:05:33.445577 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.67s
2026-02-13 05:05:33.445583 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.58s
2026-02-13 05:05:33.445590 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.88s
2026-02-13 05:05:33.445596 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.39s
2026-02-13 05:05:33.445602 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.17s
2026-02-13 05:05:33.445608 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.96s
2026-02-13 05:05:33.445614 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.95s
2026-02-13 05:05:33.445621 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.88s
2026-02-13 05:05:33.445631 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.76s
2026-02-13 05:05:33.915597 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.72s
2026-02-13 05:05:33.915720 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.67s
2026-02-13 05:05:33.915733 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.62s
2026-02-13 05:05:33.915740 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.62s
2026-02-13 05:05:33.915754 | orchestrator | k3s_prereq : Set SELinux to disabled state ------------------------------ 2.57s
2026-02-13 05:05:33.915760 | orchestrator | Manage taints ----------------------------------------------------------- 2.52s
2026-02-13 05:05:33.915766 | orchestrator | kubectl : Install required packages ------------------------------------- 2.51s
2026-02-13 05:05:34.378093 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-13 05:05:34.378195 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh
2026-02-13 05:05:34.387179 | orchestrator | + set -e
2026-02-13 05:05:34.387255 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-13 05:05:34.387265 | orchestrator | ++ export INTERACTIVE=false
2026-02-13 05:05:34.387273 | orchestrator | ++ INTERACTIVE=false
2026-02-13 05:05:34.387315 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-13 05:05:34.387325 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-13 05:05:34.387332 | orchestrator | + osism apply openstackclient
2026-02-13 05:05:46.618153 | orchestrator | 2026-02-13 05:05:46 | INFO  | Task ecc81b2f-0a3f-46e9-9447-729f4b1db4f1 (openstackclient) was prepared for execution.
2026-02-13 05:05:46.618259 | orchestrator | 2026-02-13 05:05:46 | INFO  | It takes a moment until task ecc81b2f-0a3f-46e9-9447-729f4b1db4f1 (openstackclient) has been started and output is visible here.
2026-02-13 05:06:22.549963 | orchestrator |
2026-02-13 05:06:22.550119 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-13 05:06:22.550135 | orchestrator |
2026-02-13 05:06:22.550143 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-13 05:06:22.550152 | orchestrator | Friday 13 February 2026 05:05:53 +0000 (0:00:02.324) 0:00:02.324 *******
2026-02-13 05:06:22.550163 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-13 05:06:22.550172 | orchestrator |
2026-02-13 05:06:22.550180 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-13 05:06:22.550188 | orchestrator | Friday 13 February 2026 05:05:55 +0000 (0:00:01.820) 0:00:04.145 *******
2026-02-13 05:06:22.550196 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-13 05:06:22.550217 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-13 05:06:22.550226 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-13 05:06:22.550242 | orchestrator |
2026-02-13 05:06:22.550250 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-13 05:06:22.550258 | orchestrator | Friday 13 February 2026 05:05:57 +0000 (0:00:02.308) 0:00:06.453 *******
2026-02-13 05:06:22.550266 | orchestrator | changed: [testbed-manager]
2026-02-13 05:06:22.550274 | orchestrator |
2026-02-13 05:06:22.550282 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-13 05:06:22.550291 | orchestrator | Friday 13 February 2026 05:06:00 +0000 (0:00:02.335) 0:00:08.789 *******
2026-02-13 05:06:22.550300 | orchestrator | ok: [testbed-manager]
2026-02-13 05:06:22.550309 | orchestrator |
2026-02-13 05:06:22.550317 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-13 05:06:22.550325 | orchestrator | Friday 13 February 2026 05:06:02 +0000 (0:00:02.115) 0:00:10.905 *******
2026-02-13 05:06:22.550333 | orchestrator | ok: [testbed-manager]
2026-02-13 05:06:22.550341 | orchestrator |
2026-02-13 05:06:22.550348 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-13 05:06:22.550355 | orchestrator | Friday 13 February 2026 05:06:04 +0000 (0:00:01.484) 0:00:12.874 *******
2026-02-13 05:06:22.550363 | orchestrator | ok: [testbed-manager]
2026-02-13 05:06:22.550370 | orchestrator |
2026-02-13 05:06:22.550377 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-13 05:06:22.550386 | orchestrator | Friday 13 February 2026 05:06:05 +0000 (0:00:01.484) 0:00:14.359 *******
2026-02-13 05:06:22.550394 | orchestrator | changed: [testbed-manager]
2026-02-13 05:06:22.550402 | orchestrator |
2026-02-13 05:06:22.550410 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-13 05:06:22.550418 | orchestrator | Friday 13 February 2026 05:06:16 +0000 (0:00:10.782) 0:00:25.141 *******
2026-02-13 05:06:22.550426 | orchestrator | changed: [testbed-manager]
2026-02-13 05:06:22.550433 | orchestrator |
2026-02-13 05:06:22.550441 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-13 05:06:22.550449 | orchestrator | Friday 13 February 2026 05:06:18 +0000 (0:00:02.208) 0:00:27.350 *******
2026-02-13 05:06:22.550457 | orchestrator | changed: [testbed-manager]
2026-02-13 05:06:22.550465 | orchestrator |
2026-02-13 05:06:22.550473 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-13 05:06:22.550481 | orchestrator | Friday 13 February 2026 05:06:20 +0000 (0:00:01.547) 0:00:28.897 *******
2026-02-13 05:06:22.550557 | orchestrator | ok: [testbed-manager]
2026-02-13 05:06:22.550567 | orchestrator |
2026-02-13 05:06:22.550575 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 05:06:22.550583 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-13 05:06:22.550591 | orchestrator |
2026-02-13 05:06:22.550599 | orchestrator |
2026-02-13 05:06:22.550606 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 05:06:22.550614 | orchestrator | Friday 13 February 2026 05:06:22 +0000 (0:00:01.877) 0:00:30.775 *******
2026-02-13 05:06:22.550621 | orchestrator | ===============================================================================
2026-02-13 05:06:22.550629 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 10.78s
2026-02-13 05:06:22.550637 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.34s
2026-02-13 05:06:22.550645 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.31s
2026-02-13 05:06:22.550653 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.21s
2026-02-13 05:06:22.550661 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 2.12s
2026-02-13 05:06:22.550669 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.97s
2026-02-13 05:06:22.550677 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.88s
2026-02-13 05:06:22.550685 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.82s
2026-02-13 05:06:22.550693 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.55s
2026-02-13 05:06:22.550700 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.48s
2026-02-13 05:06:22.894172 | orchestrator | + osism apply -a upgrade common
2026-02-13 05:06:25.013615 | orchestrator | 2026-02-13 05:06:25 | INFO  | Task c7037d07-c1b3-422a-8800-75548c209614 (common) was prepared for execution.
2026-02-13 05:06:25.013735 | orchestrator | 2026-02-13 05:06:25 | INFO  | It takes a moment until task c7037d07-c1b3-422a-8800-75548c209614 (common) has been started and output is visible here.
2026-02-13 05:06:46.194078 | orchestrator |
2026-02-13 05:06:46.194164 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-13 05:06:46.194176 | orchestrator |
2026-02-13 05:06:46.194184 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-13 05:06:46.194192 | orchestrator | Friday 13 February 2026 05:06:31 +0000 (0:00:02.488) 0:00:02.488 *******
2026-02-13 05:06:46.194200 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 05:06:46.194208 | orchestrator |
2026-02-13 05:06:46.194216 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-13 05:06:46.194223 | orchestrator | Friday 13 February 2026 05:06:35 +0000 (0:00:03.807) 0:00:06.296 *******
2026-02-13 05:06:46.194231 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-13 05:06:46.194239 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-13 05:06:46.194246 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-13 05:06:46.194253 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-13 05:06:46.194261 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-13 05:06:46.194268 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-13 05:06:46.194275 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-13 05:06:46.194282 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-13 05:06:46.194310 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-13 05:06:46.194317 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-13 05:06:46.194325 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-13 05:06:46.194332 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-13 05:06:46.194339 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-13 05:06:46.194346 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-13 05:06:46.194353 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-13 05:06:46.194361 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-13 05:06:46.194368 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-13 05:06:46.194375 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-13 05:06:46.194382 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-13 05:06:46.194390 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-13 05:06:46.194397 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-13 05:06:46.194404 | orchestrator |
2026-02-13 05:06:46.194412 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-13 05:06:46.194419 | orchestrator | Friday 13 February 2026 05:06:40 +0000 (0:00:04.836) 0:00:11.132 *******
2026-02-13 05:06:46.194426 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 05:06:46.194434 | orchestrator |
2026-02-13 05:06:46.194442 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-02-13 05:06:46.194449 | orchestrator | Friday 13 February 2026 05:06:43 +0000 (0:00:03.067) 0:00:14.199 *******
2026-02-13 05:06:46.194460 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 05:06:46.194476 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 05:06:46.194505 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 05:06:46.194514 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 05:06:46.194556 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 05:06:46.194565 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:46.194761 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:46.194786 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:46.194798 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:46.194827 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:51.129286 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:51.129400 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:51.129414 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:51.129423 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:51.129432 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:51.129442 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 05:06:51.129453 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-13 05:06:51.129471 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:51.129586 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-13 05:06:51.129599 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:06:51.129608 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:06:51.129617 | orchestrator | 2026-02-13 05:06:51.129627 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-13 05:06:51.129636 | orchestrator | Friday 13 February 2026 05:06:50 +0000 (0:00:06.557) 0:00:20.756 ******* 2026-02-13 05:06:51.129645 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:51.129656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:51.129666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:51.129675 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:51.129702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:53.181736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:53.181855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:53.181878 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:53.181947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:53.181967 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:06:53.181985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:53.182000 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:06:53.182074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:53.182126 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:06:53.182142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:53.182171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:53.182179 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:06:53.182188 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:53.182196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:53.182206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:53.182214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:53.182223 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:06:53.182231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-13 05:06:53.182251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:53.182267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:56.461588 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:56.461689 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:06:56.461705 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:06:56.461716 | orchestrator | 2026-02-13 05:06:56.461728 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-13 05:06:56.461738 | orchestrator | Friday 13 February 
2026 05:06:53 +0000 (0:00:03.148) 0:00:23.905 ******* 2026-02-13 05:06:56.461751 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:56.461765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:56.461777 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:56.461808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:56.461819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:56.461830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:56.461857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:56.461869 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:56.461879 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:06:56.461889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:56.461900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:56.461918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:56.461929 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:06:56.461958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:06:56.461969 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:06:56.461987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:08.458121 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:07:08.458231 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:07:08.458250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:07:08.458267 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:08.458280 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:07:08.458292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:07:08.458331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:08.458343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:08.458354 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:07:08.458379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:08.458392 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:08.458403 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:07:08.458414 | orchestrator | 2026-02-13 05:07:08.458426 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-13 05:07:08.458438 | orchestrator | Friday 13 February 2026 05:06:56 +0000 (0:00:03.283) 0:00:27.189 ******* 2026-02-13 05:07:08.458449 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:07:08.458478 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:07:08.458491 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:07:08.458502 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:07:08.458512 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:07:08.458556 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:07:08.458570 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:07:08.458584 | orchestrator | 2026-02-13 05:07:08.458598 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-13 05:07:08.458611 | orchestrator | Friday 13 February 2026 05:06:58 +0000 (0:00:02.189) 0:00:29.378 ******* 2026-02-13 05:07:08.458627 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:07:08.458646 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:07:08.458672 | orchestrator | skipping: [testbed-node-1] 
2026-02-13 05:07:08.458698 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:07:08.458713 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:07:08.458731 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:07:08.458749 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:07:08.458766 | orchestrator | 2026-02-13 05:07:08.458803 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-13 05:07:08.458823 | orchestrator | Friday 13 February 2026 05:07:00 +0000 (0:00:02.153) 0:00:31.532 ******* 2026-02-13 05:07:08.458844 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:07:08.458863 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:07:08.458882 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:07:08.458906 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:07:08.458931 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:07:08.458950 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:07:08.458969 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:07:08.458987 | orchestrator | 2026-02-13 05:07:08.459004 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-13 05:07:08.459023 | orchestrator | Friday 13 February 2026 05:07:02 +0000 (0:00:01.933) 0:00:33.465 ******* 2026-02-13 05:07:08.459041 | orchestrator | changed: [testbed-manager] 2026-02-13 05:07:08.459059 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:07:08.459077 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:07:08.459096 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:07:08.459115 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:07:08.459134 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:07:08.459152 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:07:08.459175 | orchestrator | 2026-02-13 05:07:08.459202 | orchestrator | TASK [common : Copying over config.json files for services] 
******************** 2026-02-13 05:07:08.459218 | orchestrator | Friday 13 February 2026 05:07:05 +0000 (0:00:03.245) 0:00:36.710 ******* 2026-02-13 05:07:08.459239 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:08.459259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:08.459288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:08.459305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 
'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:08.459349 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:12.986291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986322 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:12.986422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:12.986452 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:12.986490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:34.141934 | orchestrator | 2026-02-13 05:07:34.142108 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-13 05:07:34.142134 | orchestrator | Friday 13 February 2026 05:07:12 +0000 (0:00:06.998) 0:00:43.708 ******* 2026-02-13 05:07:34.142149 | orchestrator | [WARNING]: Skipped 2026-02-13 05:07:34.142165 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-13 05:07:34.142181 | orchestrator | to this access issue: 2026-02-13 05:07:34.142196 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-13 05:07:34.142210 | orchestrator | directory 2026-02-13 05:07:34.142225 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-13 05:07:34.142239 | orchestrator | 2026-02-13 05:07:34.142253 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-13 05:07:34.142267 | orchestrator | Friday 13 February 2026 05:07:15 +0000 (0:00:02.424) 0:00:46.133 ******* 2026-02-13 05:07:34.142281 | orchestrator | [WARNING]: Skipped 2026-02-13 05:07:34.142295 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-13 05:07:34.142309 | orchestrator | to this access issue: 2026-02-13 05:07:34.142324 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-13 05:07:34.142337 | orchestrator | directory 2026-02-13 05:07:34.142350 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-13 05:07:34.142364 | orchestrator | 2026-02-13 05:07:34.142378 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-13 
05:07:34.142393 | orchestrator | Friday 13 February 2026 05:07:17 +0000 (0:00:01.852) 0:00:47.985 ******* 2026-02-13 05:07:34.142406 | orchestrator | [WARNING]: Skipped 2026-02-13 05:07:34.142420 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-13 05:07:34.142433 | orchestrator | to this access issue: 2026-02-13 05:07:34.142447 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-13 05:07:34.142462 | orchestrator | directory 2026-02-13 05:07:34.142478 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-13 05:07:34.142493 | orchestrator | 2026-02-13 05:07:34.142509 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-13 05:07:34.142540 | orchestrator | Friday 13 February 2026 05:07:19 +0000 (0:00:01.854) 0:00:49.839 ******* 2026-02-13 05:07:34.142555 | orchestrator | [WARNING]: Skipped 2026-02-13 05:07:34.142569 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-13 05:07:34.142585 | orchestrator | to this access issue: 2026-02-13 05:07:34.142601 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-13 05:07:34.142643 | orchestrator | directory 2026-02-13 05:07:34.142659 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-13 05:07:34.142674 | orchestrator | 2026-02-13 05:07:34.142690 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-13 05:07:34.142705 | orchestrator | Friday 13 February 2026 05:07:21 +0000 (0:00:01.914) 0:00:51.754 ******* 2026-02-13 05:07:34.142720 | orchestrator | changed: [testbed-manager] 2026-02-13 05:07:34.142734 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:07:34.142748 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:07:34.142763 | orchestrator | changed: [testbed-node-2] 2026-02-13 
05:07:34.142778 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:07:34.142794 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:07:34.142809 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:07:34.142823 | orchestrator | 2026-02-13 05:07:34.142837 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-13 05:07:34.142851 | orchestrator | Friday 13 February 2026 05:07:25 +0000 (0:00:04.630) 0:00:56.385 ******* 2026-02-13 05:07:34.142881 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 05:07:34.142896 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 05:07:34.142910 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 05:07:34.142923 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 05:07:34.142937 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 05:07:34.142950 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 05:07:34.142964 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-13 05:07:34.142978 | orchestrator | 2026-02-13 05:07:34.142992 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-13 05:07:34.143006 | orchestrator | Friday 13 February 2026 05:07:29 +0000 (0:00:03.776) 0:01:00.161 ******* 2026-02-13 05:07:34.143019 | orchestrator | ok: [testbed-manager] 2026-02-13 05:07:34.143033 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:07:34.143047 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:07:34.143061 | orchestrator | ok: [testbed-node-3] 
2026-02-13 05:07:34.143075 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:07:34.143089 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:07:34.143103 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:07:34.143116 | orchestrator | 2026-02-13 05:07:34.143129 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-13 05:07:34.143143 | orchestrator | Friday 13 February 2026 05:07:32 +0000 (0:00:03.424) 0:01:03.585 ******* 2026-02-13 05:07:34.143182 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:34.143201 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:34.143224 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:34.143241 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:34.143257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:34.143277 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}}) 2026-02-13 05:07:34.143292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:34.143315 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:43.011222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:43.011359 | orchestrator | ok: [testbed-node-3] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:43.011377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:43.011391 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:43.011421 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:43.011434 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:43.011446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:43.011475 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:43.011487 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:43.011507 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:43.011519 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:43.011661 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-13 05:07:43.011674 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:43.011686 | orchestrator | 2026-02-13 05:07:43.011698 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-13 05:07:43.011711 | orchestrator | Friday 13 February 2026 05:07:36 +0000 (0:00:03.248) 0:01:06.834 ******* 2026-02-13 05:07:43.011721 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-13 05:07:43.011733 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-13 05:07:43.011753 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-13 05:07:43.011766 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-13 05:07:43.011779 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-13 05:07:43.011793 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-13 05:07:43.011806 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-13 05:07:43.011819 | orchestrator | 2026-02-13 05:07:43.011832 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-13 05:07:43.011845 | orchestrator | Friday 13 February 2026 05:07:39 +0000 (0:00:03.245) 0:01:10.079 ******* 2026-02-13 05:07:43.011858 | 
orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-13 05:07:43.011870 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-13 05:07:43.011891 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-13 05:07:43.011904 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-13 05:07:43.011927 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-13 05:07:48.961040 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-13 05:07:48.961185 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-13 05:07:48.961200 | orchestrator | 2026-02-13 05:07:48.961212 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-13 05:07:48.961223 | orchestrator | Friday 13 February 2026 05:07:42 +0000 (0:00:03.660) 0:01:13.740 ******* 2026-02-13 05:07:48.961237 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:48.961251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:48.961261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:48.961288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:48.961299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 
05:07:48.961310 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:48.961372 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:48.961385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-13 05:07:48.961395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:48.961406 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:48.961423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:48.961434 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:48.961445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:48.961462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:48.961481 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:51.706665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:51.706744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-13 05:07:51.706753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:51.706773 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:51.706779 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:51.706798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:07:51.706804 | orchestrator | 2026-02-13 05:07:51.706809 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-13 05:07:51.706815 | orchestrator | Friday 13 February 2026 05:07:48 +0000 (0:00:05.956) 0:01:19.696 ******* 2026-02-13 05:07:51.706820 | orchestrator | changed: [testbed-manager] => { 2026-02-13 05:07:51.706827 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:07:51.706831 | orchestrator | } 2026-02-13 05:07:51.706836 | orchestrator | changed: [testbed-node-0] => { 2026-02-13 05:07:51.706841 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:07:51.706845 | orchestrator | } 2026-02-13 05:07:51.706850 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:07:51.706855 | orchestrator |  "msg": "Notifying handlers" 
2026-02-13 05:07:51.706859 | orchestrator | } 2026-02-13 05:07:51.706864 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 05:07:51.706868 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:07:51.706873 | orchestrator | } 2026-02-13 05:07:51.706877 | orchestrator | changed: [testbed-node-3] => { 2026-02-13 05:07:51.706882 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:07:51.706887 | orchestrator | } 2026-02-13 05:07:51.706891 | orchestrator | changed: [testbed-node-4] => { 2026-02-13 05:07:51.706896 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:07:51.706900 | orchestrator | } 2026-02-13 05:07:51.706905 | orchestrator | changed: [testbed-node-5] => { 2026-02-13 05:07:51.706909 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:07:51.706914 | orchestrator | } 2026-02-13 05:07:51.706918 | orchestrator | 2026-02-13 05:07:51.706923 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-13 05:07:51.706949 | orchestrator | Friday 13 February 2026 05:07:51 +0000 (0:00:02.117) 0:01:21.813 ******* 2026-02-13 05:07:51.706955 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:07:51.706960 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:51.706965 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:51.706970 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:07:51.706979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:07:51.706984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:51.706989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:51.706993 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:07:51.706998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:07:51.707007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:58.247244 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:58.247353 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:07:58.247371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:07:58.247436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:58.247465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:58.247485 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:07:58.247504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:07:58.247578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:58.247603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:58.247657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:07:58.247681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:58.247716 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:58.247736 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:07:58.247757 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:07:58.247784 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-13 05:07:58.247805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:58.247825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:07:58.247845 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:07:58.247867 | orchestrator | 2026-02-13 05:07:58.247888 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-13 05:07:58.247910 | orchestrator | Friday 13 February 2026 05:07:54 +0000 (0:00:03.377) 0:01:25.190 
******* 2026-02-13 05:07:58.247930 | orchestrator | 2026-02-13 05:07:58.247950 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-13 05:07:58.247968 | orchestrator | Friday 13 February 2026 05:07:54 +0000 (0:00:00.420) 0:01:25.611 ******* 2026-02-13 05:07:58.247981 | orchestrator | 2026-02-13 05:07:58.247994 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-13 05:07:58.248007 | orchestrator | Friday 13 February 2026 05:07:55 +0000 (0:00:00.468) 0:01:26.080 ******* 2026-02-13 05:07:58.248020 | orchestrator | 2026-02-13 05:07:58.248033 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-13 05:07:58.248045 | orchestrator | Friday 13 February 2026 05:07:55 +0000 (0:00:00.433) 0:01:26.514 ******* 2026-02-13 05:07:58.248058 | orchestrator | 2026-02-13 05:07:58.248070 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-13 05:07:58.248096 | orchestrator | Friday 13 February 2026 05:07:56 +0000 (0:00:00.444) 0:01:26.958 ******* 2026-02-13 05:07:58.248109 | orchestrator | 2026-02-13 05:07:58.248122 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-13 05:07:58.248136 | orchestrator | Friday 13 February 2026 05:07:56 +0000 (0:00:00.705) 0:01:27.664 ******* 2026-02-13 05:07:58.248148 | orchestrator | 2026-02-13 05:07:58.248160 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-13 05:07:58.248191 | orchestrator | Friday 13 February 2026 05:07:57 +0000 (0:00:00.447) 0:01:28.111 ******* 2026-02-13 05:10:28.174953 | orchestrator | 2026-02-13 05:10:28.175038 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-13 05:10:28.175045 | orchestrator | Friday 13 February 2026 05:07:58 +0000 (0:00:00.835) 
0:01:28.946 ******* 2026-02-13 05:10:28.175051 | orchestrator | changed: [testbed-manager] 2026-02-13 05:10:28.175056 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:10:28.175061 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:10:28.175065 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:10:28.175069 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:10:28.175073 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:10:28.175077 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:10:28.175081 | orchestrator | 2026-02-13 05:10:28.175086 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-13 05:10:28.175090 | orchestrator | Friday 13 February 2026 05:09:02 +0000 (0:01:03.943) 0:02:32.890 ******* 2026-02-13 05:10:28.175094 | orchestrator | changed: [testbed-manager] 2026-02-13 05:10:28.175098 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:10:28.175102 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:10:28.175106 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:10:28.175109 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:10:28.175113 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:10:28.175117 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:10:28.175121 | orchestrator | 2026-02-13 05:10:28.175125 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-13 05:10:28.175129 | orchestrator | Friday 13 February 2026 05:10:05 +0000 (0:01:03.247) 0:03:36.138 ******* 2026-02-13 05:10:28.175133 | orchestrator | ok: [testbed-manager] 2026-02-13 05:10:28.175138 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:10:28.175143 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:10:28.175146 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:10:28.175150 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:10:28.175154 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:10:28.175158 | 
orchestrator | ok: [testbed-node-5] 2026-02-13 05:10:28.175162 | orchestrator | 2026-02-13 05:10:28.175166 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-13 05:10:28.175170 | orchestrator | Friday 13 February 2026 05:10:08 +0000 (0:00:03.295) 0:03:39.434 ******* 2026-02-13 05:10:28.175174 | orchestrator | changed: [testbed-manager] 2026-02-13 05:10:28.175178 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:10:28.175183 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:10:28.175187 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:10:28.175191 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:10:28.175207 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:10:28.175211 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:10:28.175215 | orchestrator | 2026-02-13 05:10:28.175219 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 05:10:28.175223 | orchestrator | testbed-manager : ok=22  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:10:28.175229 | orchestrator | testbed-node-0 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:10:28.175233 | orchestrator | testbed-node-1 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:10:28.175237 | orchestrator | testbed-node-2 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:10:28.175241 | orchestrator | testbed-node-3 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:10:28.175245 | orchestrator | testbed-node-4 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:10:28.175262 | orchestrator | testbed-node-5 : ok=18  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:10:28.175266 | orchestrator | 2026-02-13 05:10:28.175270 | orchestrator 
| 2026-02-13 05:10:28.175274 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 05:10:28.175278 | orchestrator | Friday 13 February 2026 05:10:27 +0000 (0:00:19.038) 0:03:58.472 ******* 2026-02-13 05:10:28.175282 | orchestrator | =============================================================================== 2026-02-13 05:10:28.175286 | orchestrator | common : Restart fluentd container ------------------------------------- 63.94s 2026-02-13 05:10:28.175290 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 63.25s 2026-02-13 05:10:28.175293 | orchestrator | common : Restart cron container ---------------------------------------- 19.04s 2026-02-13 05:10:28.175297 | orchestrator | common : Copying over config.json files for services -------------------- 7.00s 2026-02-13 05:10:28.175301 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.56s 2026-02-13 05:10:28.175305 | orchestrator | service-check-containers : common | Check containers -------------------- 5.96s 2026-02-13 05:10:28.175309 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.84s 2026-02-13 05:10:28.175313 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.63s 2026-02-13 05:10:28.175317 | orchestrator | common : include_tasks -------------------------------------------------- 3.81s 2026-02-13 05:10:28.175320 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.78s 2026-02-13 05:10:28.175324 | orchestrator | common : Flush handlers ------------------------------------------------- 3.76s 2026-02-13 05:10:28.175328 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.66s 2026-02-13 05:10:28.175344 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.42s 2026-02-13 
05:10:28.175349 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.38s 2026-02-13 05:10:28.175353 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.30s 2026-02-13 05:10:28.175357 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.28s 2026-02-13 05:10:28.175361 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.25s 2026-02-13 05:10:28.175364 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.25s 2026-02-13 05:10:28.175368 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.25s 2026-02-13 05:10:28.175372 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.15s 2026-02-13 05:10:28.484746 | orchestrator | + osism apply -a upgrade loadbalancer 2026-02-13 05:10:30.520186 | orchestrator | 2026-02-13 05:10:30 | INFO  | Task 742a9d30-b4b8-4d25-a7e3-fffba456095d (loadbalancer) was prepared for execution. 2026-02-13 05:10:30.520286 | orchestrator | 2026-02-13 05:10:30 | INFO  | It takes a moment until task 742a9d30-b4b8-4d25-a7e3-fffba456095d (loadbalancer) has been started and output is visible here. 
2026-02-13 05:11:04.195140 | orchestrator | 2026-02-13 05:11:04.195273 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 05:11:04.195290 | orchestrator | 2026-02-13 05:11:04.195302 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 05:11:04.195314 | orchestrator | Friday 13 February 2026 05:10:36 +0000 (0:00:01.477) 0:00:01.477 ******* 2026-02-13 05:11:04.195325 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:11:04.195337 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:11:04.195348 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:11:04.195359 | orchestrator | 2026-02-13 05:11:04.195370 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 05:11:04.195406 | orchestrator | Friday 13 February 2026 05:10:37 +0000 (0:00:01.665) 0:00:03.143 ******* 2026-02-13 05:11:04.195419 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-13 05:11:04.195445 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-13 05:11:04.195456 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-13 05:11:04.195467 | orchestrator | 2026-02-13 05:11:04.195478 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-13 05:11:04.195489 | orchestrator | 2026-02-13 05:11:04.195499 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-13 05:11:04.195510 | orchestrator | Friday 13 February 2026 05:10:39 +0000 (0:00:01.790) 0:00:04.933 ******* 2026-02-13 05:11:04.195522 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:11:04.195563 | orchestrator | 2026-02-13 05:11:04.195582 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter 
containers] *** 2026-02-13 05:11:04.195593 | orchestrator | Friday 13 February 2026 05:10:42 +0000 (0:00:02.647) 0:00:07.581 ******* 2026-02-13 05:11:04.195604 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:11:04.195615 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:11:04.195626 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:11:04.195637 | orchestrator | 2026-02-13 05:11:04.195648 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-02-13 05:11:04.195661 | orchestrator | Friday 13 February 2026 05:10:44 +0000 (0:00:02.226) 0:00:09.807 ******* 2026-02-13 05:11:04.195673 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:11:04.195686 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:11:04.195698 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:11:04.195710 | orchestrator | 2026-02-13 05:11:04.195723 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-13 05:11:04.195735 | orchestrator | Friday 13 February 2026 05:10:46 +0000 (0:00:02.220) 0:00:12.027 ******* 2026-02-13 05:11:04.195748 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:11:04.195760 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:11:04.195772 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:11:04.195785 | orchestrator | 2026-02-13 05:11:04.195799 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-13 05:11:04.195811 | orchestrator | Friday 13 February 2026 05:10:48 +0000 (0:00:01.918) 0:00:13.946 ******* 2026-02-13 05:11:04.195824 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:11:04.195837 | orchestrator | 2026-02-13 05:11:04.195849 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-13 05:11:04.195862 | orchestrator | Friday 13 February 2026 05:10:50 +0000 (0:00:01.860) 0:00:15.807 ******* 2026-02-13 
05:11:04.195874 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:11:04.195887 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:11:04.195899 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:11:04.195911 | orchestrator | 2026-02-13 05:11:04.195923 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-13 05:11:04.195936 | orchestrator | Friday 13 February 2026 05:10:52 +0000 (0:00:01.762) 0:00:17.569 ******* 2026-02-13 05:11:04.195948 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-13 05:11:04.195961 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-13 05:11:04.195973 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-13 05:11:04.195986 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-13 05:11:04.195998 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-13 05:11:04.196010 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-13 05:11:04.196022 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-13 05:11:04.196045 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-13 05:11:04.196056 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-13 05:11:04.196067 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-13 05:11:04.196077 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-13 05:11:04.196088 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 
2026-02-13 05:11:04.196099 | orchestrator | 2026-02-13 05:11:04.196110 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-13 05:11:04.196121 | orchestrator | Friday 13 February 2026 05:10:55 +0000 (0:00:03.283) 0:00:20.853 ******* 2026-02-13 05:11:04.196132 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-13 05:11:04.196143 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-13 05:11:04.196154 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-13 05:11:04.196165 | orchestrator | 2026-02-13 05:11:04.196176 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-13 05:11:04.196205 | orchestrator | Friday 13 February 2026 05:10:57 +0000 (0:00:01.859) 0:00:22.712 ******* 2026-02-13 05:11:04.196216 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-13 05:11:04.196227 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-13 05:11:04.196239 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-13 05:11:04.196250 | orchestrator | 2026-02-13 05:11:04.196260 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-13 05:11:04.196271 | orchestrator | Friday 13 February 2026 05:10:59 +0000 (0:00:02.295) 0:00:25.007 ******* 2026-02-13 05:11:04.196282 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-13 05:11:04.196293 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:11:04.196304 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-13 05:11:04.196322 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:11:04.196340 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-13 05:11:04.196357 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:11:04.196384 | orchestrator | 2026-02-13 05:11:04.196402 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-02-13 05:11:04.196420 | orchestrator | Friday 13 February 2026 05:11:01 +0000 (0:00:01.807) 0:00:26.815 ******* 2026-02-13 05:11:04.196441 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:04.196470 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:04.196490 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:04.196559 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:04.196581 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:04.196616 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:15.165788 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:11:15.165888 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:11:15.165901 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:11:15.165935 | orchestrator | 2026-02-13 05:11:15.165947 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-13 05:11:15.165957 | orchestrator | Friday 13 February 2026 05:11:04 +0000 (0:00:02.713) 0:00:29.529 ******* 2026-02-13 05:11:15.165982 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:11:15.165993 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:11:15.166002 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:11:15.166010 | orchestrator | 2026-02-13 05:11:15.166068 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-13 05:11:15.166078 | orchestrator | Friday 13 February 2026 05:11:06 +0000 (0:00:01.942) 0:00:31.471 ******* 2026-02-13 05:11:15.166087 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-13 05:11:15.166097 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-13 05:11:15.166105 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-13 05:11:15.166114 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-13 05:11:15.166126 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-13 05:11:15.166141 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-13 05:11:15.166155 | orchestrator | 2026-02-13 05:11:15.166177 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-13 05:11:15.166194 | orchestrator | Friday 13 February 2026 05:11:08 +0000 (0:00:02.860) 0:00:34.332 ******* 2026-02-13 05:11:15.166208 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:11:15.166223 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:11:15.166237 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:11:15.166252 | orchestrator | 2026-02-13 05:11:15.166266 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-13 05:11:15.166281 | orchestrator | Friday 13 February 2026 05:11:11 +0000 (0:00:02.257) 0:00:36.589 ******* 2026-02-13 05:11:15.166296 | orchestrator | ok: 
[testbed-node-0] 2026-02-13 05:11:15.166311 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:11:15.166326 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:11:15.166342 | orchestrator | 2026-02-13 05:11:15.166358 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-13 05:11:15.166375 | orchestrator | Friday 13 February 2026 05:11:13 +0000 (0:00:02.234) 0:00:38.824 ******* 2026-02-13 05:11:15.166394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-13 05:11:15.166436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:11:15.166448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:11:15.166471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 05:11:15.166483 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:11:15.166494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-13 05:11:15.166504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:11:15.166515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:11:15.166526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 05:11:15.166558 | orchestrator | skipping: [testbed-node-1] 2026-02-13 
05:11:15.166584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-13 05:11:19.379191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:11:19.379268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:11:19.379277 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 05:11:19.379281 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:11:19.379287 | orchestrator | 2026-02-13 05:11:19.379292 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-13 05:11:19.379297 | orchestrator | Friday 13 February 2026 05:11:15 +0000 (0:00:01.669) 0:00:40.493 ******* 2026-02-13 05:11:19.379301 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:19.379305 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:19.379323 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:19.379350 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:19.379355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:11:19.379359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 05:11:19.379363 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:19.379366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:11:19.379373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 05:11:19.379385 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:33.006200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:11:33.006306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2', '__omit_place_holder__3ccdeb5673936d88fee927ed8f3c4a7b5c5c3be2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-13 05:11:33.006320 | orchestrator | 2026-02-13 05:11:33.006331 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-13 05:11:33.006342 | orchestrator | Friday 13 February 2026 05:11:19 +0000 (0:00:04.215) 0:00:44.709 ******* 2026-02-13 05:11:33.006352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:33.006363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:33.006408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:33.006419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:33.006446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:33.006456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:33.006465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:11:33.006475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:11:33.006484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:11:33.006500 | orchestrator | 2026-02-13 05:11:33.006509 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-13 05:11:33.006518 | orchestrator | Friday 13 February 2026 05:11:24 +0000 (0:00:04.820) 0:00:49.530 ******* 2026-02-13 05:11:33.006527 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-13 05:11:33.006606 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-13 05:11:33.006616 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-13 
05:11:33.006625 | orchestrator | 2026-02-13 05:11:33.006634 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-13 05:11:33.006643 | orchestrator | Friday 13 February 2026 05:11:26 +0000 (0:00:02.733) 0:00:52.263 ******* 2026-02-13 05:11:33.006652 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-13 05:11:33.006662 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-13 05:11:33.006670 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-13 05:11:33.006679 | orchestrator | 2026-02-13 05:11:33.006688 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-13 05:11:33.006697 | orchestrator | Friday 13 February 2026 05:11:31 +0000 (0:00:04.263) 0:00:56.526 ******* 2026-02-13 05:11:33.006706 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:11:33.006716 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:11:33.006731 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:11:53.198809 | orchestrator | 2026-02-13 05:11:53.198898 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-13 05:11:53.198909 | orchestrator | Friday 13 February 2026 05:11:32 +0000 (0:00:01.810) 0:00:58.336 ******* 2026-02-13 05:11:53.198917 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-13 05:11:53.198925 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-13 05:11:53.198931 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-13 05:11:53.198938 | 
orchestrator | 2026-02-13 05:11:53.198944 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-13 05:11:53.198951 | orchestrator | Friday 13 February 2026 05:11:35 +0000 (0:00:02.957) 0:01:01.294 ******* 2026-02-13 05:11:53.198957 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-13 05:11:53.198965 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-13 05:11:53.198971 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-13 05:11:53.198977 | orchestrator | 2026-02-13 05:11:53.198983 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-13 05:11:53.198989 | orchestrator | Friday 13 February 2026 05:11:38 +0000 (0:00:02.680) 0:01:03.975 ******* 2026-02-13 05:11:53.199033 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:11:53.199040 | orchestrator | 2026-02-13 05:11:53.199046 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-13 05:11:53.199053 | orchestrator | Friday 13 February 2026 05:11:40 +0000 (0:00:01.852) 0:01:05.828 ******* 2026-02-13 05:11:53.199076 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-02-13 05:11:53.199083 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-02-13 05:11:53.199090 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-02-13 05:11:53.199096 | orchestrator | 2026-02-13 05:11:53.199102 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-13 05:11:53.199109 | orchestrator | Friday 13 February 2026 05:11:43 +0000 (0:00:02.697) 0:01:08.525 ******* 2026-02-13 05:11:53.199115 | 
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-13 05:11:53.199121 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-13 05:11:53.199127 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-13 05:11:53.199134 | orchestrator | 2026-02-13 05:11:53.199140 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-13 05:11:53.199146 | orchestrator | Friday 13 February 2026 05:11:45 +0000 (0:00:02.640) 0:01:11.166 ******* 2026-02-13 05:11:53.199152 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:11:53.199159 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:11:53.199165 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:11:53.199172 | orchestrator | 2026-02-13 05:11:53.199178 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-13 05:11:53.199184 | orchestrator | Friday 13 February 2026 05:11:47 +0000 (0:00:01.287) 0:01:12.454 ******* 2026-02-13 05:11:53.199190 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:11:53.199196 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:11:53.199203 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:11:53.199209 | orchestrator | 2026-02-13 05:11:53.199215 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-13 05:11:53.199221 | orchestrator | Friday 13 February 2026 05:11:48 +0000 (0:00:01.854) 0:01:14.309 ******* 2026-02-13 05:11:53.199232 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:53.199242 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:53.199263 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-13 05:11:53.199270 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:53.199282 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:53.199289 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:11:53.199295 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:11:53.199306 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:11:53.199317 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:11:56.849968 | orchestrator | 2026-02-13 05:11:56.850149 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-13 05:11:56.850178 | orchestrator | Friday 13 February 2026 05:11:53 +0000 (0:00:04.221) 0:01:18.530 ******* 2026-02-13 05:11:56.850204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-13 05:11:56.850248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:11:56.850261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:11:56.850274 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:11:56.850287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-13 05:11:56.850300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:11:56.850312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:11:56.850323 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:11:56.850353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-13 05:11:56.850374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:11:56.850386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:11:56.850397 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:11:56.850409 | orchestrator | 2026-02-13 05:11:56.850420 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 
2026-02-13 05:11:56.850431 | orchestrator | Friday 13 February 2026 05:11:54 +0000 (0:00:01.609) 0:01:20.140 ******* 2026-02-13 05:11:56.850443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-13 05:11:56.850475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:11:56.850496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:11:56.850515 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:11:56.850610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-13 05:12:08.429748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:12:08.429879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 05:12:08.429904 | orchestrator | skipping: [testbed-node-1]
2026-02-13 05:12:08.429922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-13 05:12:08.429934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 05:12:08.429959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 05:12:08.429968 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:12:08.429976 | orchestrator |
2026-02-13 05:12:08.429986 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-02-13 05:12:08.429995 | orchestrator | Friday 13 February 2026 05:11:56 +0000 (0:00:02.044) 0:01:22.184 *******
2026-02-13 05:12:08.430074 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-13 05:12:08.430085 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-13 05:12:08.430093 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-02-13 05:12:08.430101 | orchestrator |
2026-02-13 05:12:08.430109 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-02-13 05:12:08.430117 | orchestrator | Friday 13 February 2026 05:11:59 +0000 (0:00:02.500) 0:01:24.684 *******
2026-02-13 05:12:08.430124 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-13 05:12:08.430132 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-13 05:12:08.430140 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-02-13 05:12:08.430148 | orchestrator |
2026-02-13 05:12:08.430177 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-02-13 05:12:08.430191 | orchestrator | Friday 13 February 2026 05:12:01 +0000 (0:00:02.553) 0:01:27.238 *******
2026-02-13 05:12:08.430204 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-13 05:12:08.430218 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-13 05:12:08.430232 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-13 05:12:08.430246 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:12:08.430261 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-02-13 05:12:08.430276 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-13 05:12:08.430290 | orchestrator | skipping: [testbed-node-1]
2026-02-13 05:12:08.430303 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-13 05:12:08.430313 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:12:08.430323 | orchestrator |
2026-02-13 05:12:08.430332 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] **************
2026-02-13 05:12:08.430342 | orchestrator | Friday 13 February 2026 05:12:04 +0000 (0:00:02.440) 0:01:29.678 *******
2026-02-13 05:12:08.430353 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-13 05:12:08.430363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-13 05:12:08.430378 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-13 05:12:08.430396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 05:12:08.430415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 05:12:12.082987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 05:12:12.083115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 05:12:12.083133 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 05:12:12.083144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 05:12:12.083183 | orchestrator |
2026-02-13 05:12:12.083196 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-02-13 05:12:12.083207 | orchestrator | Friday 13 February 2026 05:12:08 +0000 (0:00:04.077) 0:01:33.756 *******
2026-02-13 05:12:12.083220 | orchestrator | changed: [testbed-node-0] => {
2026-02-13 05:12:12.083239 | orchestrator |  "msg": "Notifying handlers"
2026-02-13 05:12:12.083255 | orchestrator | }
2026-02-13 05:12:12.083273 | orchestrator | changed: [testbed-node-1] => {
2026-02-13 05:12:12.083290 | orchestrator |  "msg": "Notifying handlers"
2026-02-13 05:12:12.083306 | orchestrator | }
2026-02-13 05:12:12.083321 | orchestrator | changed: [testbed-node-2] => {
2026-02-13 05:12:12.083338 | orchestrator |  "msg": "Notifying handlers"
2026-02-13 05:12:12.083355 | orchestrator | }
2026-02-13 05:12:12.083373 | orchestrator |
2026-02-13 05:12:12.083389 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-13 05:12:12.083407 | orchestrator | Friday 13 February 2026 05:12:09 +0000 (0:00:01.394) 0:01:35.151 *******
2026-02-13 05:12:12.083425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-13 05:12:12.083467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 05:12:12.083508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 05:12:12.083527 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:12:12.083575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-13 05:12:12.083586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 05:12:12.083609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 05:12:12.083626 | orchestrator | skipping: [testbed-node-1]
2026-02-13 05:12:12.083637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-13 05:12:12.083648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-13 05:12:12.083669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-13 05:12:18.240862 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:12:18.240978 | orchestrator |
2026-02-13 05:12:18.241004 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-02-13 05:12:18.241026 | orchestrator | Friday 13 February 2026 05:12:12 +0000 (0:00:02.257) 0:01:37.409 *******
2026-02-13 05:12:18.241045 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 05:12:18.241064 | orchestrator |
2026-02-13 05:12:18.241083 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-02-13 05:12:18.241103 | orchestrator | Friday 13 February 2026 05:12:14 +0000 (0:00:02.069) 0:01:39.478 *******
2026-02-13 05:12:18.241129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:12:18.241181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-13 05:12:18.241212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-13 05:12:18.241225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-13 05:12:18.241256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:12:18.241269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-13 05:12:18.241289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-13 05:12:18.241300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-13 05:12:18.241317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:12:18.241330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-13 05:12:18.241349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-13 05:12:19.934851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-13 05:12:19.935006 | orchestrator |
2026-02-13 05:12:19.935032 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-02-13 05:12:19.935051 | orchestrator | Friday 13 February 2026 05:12:19 +0000 (0:00:05.179) 0:01:44.658 *******
2026-02-13 05:12:19.935066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:12:19.935100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-13 05:12:19.935117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-13 05:12:19.935130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-13 05:12:19.935144 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:12:19.935183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:12:19.935211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-13 05:12:19.935226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-13 05:12:19.935246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-13 05:12:19.935262 | orchestrator | skipping: [testbed-node-1]
2026-02-13 05:12:19.935273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:12:19.935281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-13 05:12:19.935302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-13 05:12:34.169349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-13 05:12:34.169454 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:12:34.169469 | orchestrator |
2026-02-13 05:12:34.169480 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-02-13 05:12:34.169490 | orchestrator | Friday 13 February 2026 05:12:21 +0000 (0:00:01.704) 0:01:46.363 *******
2026-02-13 05:12:34.169500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-13 05:12:34.169512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-13 05:12:34.169523 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:12:34.169532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-13 05:12:34.169626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-13 05:12:34.169637 | orchestrator | skipping: [testbed-node-1]
2026-02-13 05:12:34.169646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-13 05:12:34.169656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-13 05:12:34.169665 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:12:34.169674 | orchestrator |
2026-02-13 05:12:34.169683 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-02-13 05:12:34.169692 | orchestrator | Friday 13 February 2026 05:12:23 +0000 (0:00:02.104) 0:01:48.467 *******
2026-02-13 05:12:34.169701 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:12:34.169711 | orchestrator | ok: [testbed-node-1]
2026-02-13 05:12:34.169720 | orchestrator | ok: [testbed-node-2]
2026-02-13 05:12:34.169729 | orchestrator |
2026-02-13 05:12:34.169737 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-02-13 05:12:34.169746 | orchestrator | Friday 13 February 2026 05:12:25 +0000 (0:00:02.234) 0:01:50.701 *******
2026-02-13 05:12:34.169776 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:12:34.169785 | orchestrator | ok: [testbed-node-1]
2026-02-13 05:12:34.169794 | orchestrator | ok: [testbed-node-2]
2026-02-13 05:12:34.169802 | orchestrator |
2026-02-13 05:12:34.169811 | orchestrator | TASK [include_role : barbican] *************************************************
2026-02-13 05:12:34.169820 | orchestrator | Friday 13 February 2026 05:12:28 +0000 (0:00:02.787) 0:01:53.489 *******
2026-02-13 05:12:34.169829 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 05:12:34.169837 | orchestrator |
2026-02-13 05:12:34.169846 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-02-13 05:12:34.169855 | orchestrator | Friday 13 February 2026 05:12:29 +0000 (0:00:01.553) 0:01:55.042 *******
2026-02-13 05:12:34.169882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode':
'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:12:34.169895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 05:12:34.169906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:12:34.169920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:12:34.169937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 05:12:34.169947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:12:34.169963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:12:35.759755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 05:12:35.759857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:12:35.759896 | orchestrator | 2026-02-13 05:12:35.759909 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-13 05:12:35.759921 | orchestrator | Friday 13 February 2026 05:12:34 +0000 (0:00:04.460) 0:01:59.503 ******* 2026-02-13 05:12:35.759951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:12:35.759964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 05:12:35.759975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:12:35.759986 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:12:35.760023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:12:35.760035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-13 05:12:35.760054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:12:35.760064 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:12:35.760074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:12:35.760085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}})  2026-02-13 05:12:35.760102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:12:51.639959 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:12:51.640079 | orchestrator | 2026-02-13 05:12:51.640094 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-13 05:12:51.640119 | orchestrator | Friday 13 February 2026 05:12:35 +0000 (0:00:01.596) 0:02:01.099 ******* 2026-02-13 05:12:51.640129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:12:51.640160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:12:51.640170 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:12:51.640179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 
05:12:51.640187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:12:51.640196 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:12:51.640204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:12:51.640212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:12:51.640220 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:12:51.640228 | orchestrator | 2026-02-13 05:12:51.640237 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-13 05:12:51.640245 | orchestrator | Friday 13 February 2026 05:12:37 +0000 (0:00:01.704) 0:02:02.804 ******* 2026-02-13 05:12:51.640253 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:12:51.640261 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:12:51.640275 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:12:51.640288 | orchestrator | 2026-02-13 05:12:51.640301 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-13 05:12:51.640313 | orchestrator | Friday 13 February 2026 05:12:39 +0000 (0:00:02.270) 0:02:05.075 ******* 2026-02-13 05:12:51.640326 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:12:51.640338 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:12:51.640349 | orchestrator | ok: 
[testbed-node-2] 2026-02-13 05:12:51.640360 | orchestrator | 2026-02-13 05:12:51.640373 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-13 05:12:51.640386 | orchestrator | Friday 13 February 2026 05:12:42 +0000 (0:00:02.770) 0:02:07.846 ******* 2026-02-13 05:12:51.640399 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:12:51.640413 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:12:51.640427 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:12:51.640441 | orchestrator | 2026-02-13 05:12:51.640451 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-13 05:12:51.640460 | orchestrator | Friday 13 February 2026 05:12:43 +0000 (0:00:01.428) 0:02:09.274 ******* 2026-02-13 05:12:51.640468 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:12:51.640475 | orchestrator | 2026-02-13 05:12:51.640483 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-13 05:12:51.640491 | orchestrator | Friday 13 February 2026 05:12:45 +0000 (0:00:01.644) 0:02:10.918 ******* 2026-02-13 05:12:51.640501 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-13 05:12:51.640571 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-13 05:12:51.640584 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-13 05:12:51.640592 | orchestrator | 2026-02-13 05:12:51.640600 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-13 
05:12:51.640609 | orchestrator | Friday 13 February 2026 05:12:49 +0000 (0:00:03.483) 0:02:14.402 ******* 2026-02-13 05:12:51.640617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-13 05:12:51.640626 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:12:51.640634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-13 05:12:51.640648 | orchestrator | skipping: [testbed-node-1] 2026-02-13 
05:12:51.640663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-13 05:13:03.247166 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:03.247291 | orchestrator | 2026-02-13 05:13:03.247308 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-13 05:13:03.247322 | orchestrator | Friday 13 February 2026 05:12:51 +0000 (0:00:02.568) 0:02:16.970 ******* 2026-02-13 05:13:03.247336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 05:13:03.247392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 
192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 05:13:03.247408 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:03.247420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 05:13:03.247432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 05:13:03.247444 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:13:03.247455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 05:13:03.247467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-13 05:13:03.247576 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:03.247592 | orchestrator | 2026-02-13 05:13:03.247604 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-13 05:13:03.247615 | orchestrator | Friday 13 February 2026 05:12:54 +0000 (0:00:02.710) 0:02:19.681 ******* 2026-02-13 05:13:03.247627 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:03.247638 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:13:03.247649 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:03.247660 | orchestrator | 2026-02-13 05:13:03.247673 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-13 05:13:03.247686 | orchestrator | Friday 13 February 2026 05:12:55 +0000 (0:00:01.448) 0:02:21.130 ******* 2026-02-13 05:13:03.247699 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:03.247719 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:13:03.247740 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:03.247769 | orchestrator | 2026-02-13 05:13:03.247789 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-13 05:13:03.247807 | orchestrator | Friday 13 February 2026 05:12:58 +0000 (0:00:02.268) 0:02:23.398 ******* 2026-02-13 05:13:03.247828 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:13:03.247847 | orchestrator | 2026-02-13 05:13:03.247866 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-13 05:13:03.247885 | orchestrator | Friday 13 February 2026 05:12:59 +0000 (0:00:01.706) 0:02:25.104 ******* 2026-02-13 05:13:03.247949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:13:03.247978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:13:03.247999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 05:13:03.248026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 05:13:03.248042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:13:03.248072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:13:05.214914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 05:13:05.215026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 05:13:05.215068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:13:05.215084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:13:05.215111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 05:13:05.215144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 05:13:05.215157 | orchestrator | 2026-02-13 05:13:05.215170 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-13 05:13:05.215183 | orchestrator | Friday 13 February 2026 05:13:04 +0000 
(0:00:04.600) 0:02:29.705 ******* 2026-02-13 05:13:05.215196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:13:05.215217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:13:05.215229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 05:13:05.215245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 05:13:05.215257 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:05.215286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:13:16.596845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:13:16.596992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 05:13:16.597009 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 05:13:16.597020 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:13:16.597034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:13:16.597062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:13:16.597090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-13 05:13:16.597106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}})  2026-02-13 05:13:16.597116 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:16.597125 | orchestrator | 2026-02-13 05:13:16.597135 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-13 05:13:16.597145 | orchestrator | Friday 13 February 2026 05:13:06 +0000 (0:00:01.991) 0:02:31.697 ******* 2026-02-13 05:13:16.597155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:16.597166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:16.597176 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:16.597185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:16.597195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:16.597203 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:13:16.597212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  
2026-02-13 05:13:16.597226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:16.597235 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:16.597244 | orchestrator | 2026-02-13 05:13:16.597253 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-13 05:13:16.597262 | orchestrator | Friday 13 February 2026 05:13:08 +0000 (0:00:01.946) 0:02:33.643 ******* 2026-02-13 05:13:16.597271 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:13:16.597280 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:13:16.597289 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:13:16.597298 | orchestrator | 2026-02-13 05:13:16.597306 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-13 05:13:16.597321 | orchestrator | Friday 13 February 2026 05:13:10 +0000 (0:00:02.216) 0:02:35.859 ******* 2026-02-13 05:13:16.597330 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:13:16.597338 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:13:16.597347 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:13:16.597355 | orchestrator | 2026-02-13 05:13:16.597365 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-13 05:13:16.597375 | orchestrator | Friday 13 February 2026 05:13:13 +0000 (0:00:02.917) 0:02:38.777 ******* 2026-02-13 05:13:16.597385 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:16.597395 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:13:16.597405 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:16.597415 | orchestrator | 2026-02-13 05:13:16.597424 | orchestrator | TASK [include_role : cyborg] 
*************************************************** 2026-02-13 05:13:16.597435 | orchestrator | Friday 13 February 2026 05:13:15 +0000 (0:00:01.745) 0:02:40.522 ******* 2026-02-13 05:13:16.597445 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:16.597455 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:13:16.597491 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:21.829200 | orchestrator | 2026-02-13 05:13:21.829318 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-13 05:13:21.829334 | orchestrator | Friday 13 February 2026 05:13:16 +0000 (0:00:01.410) 0:02:41.933 ******* 2026-02-13 05:13:21.829344 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:13:21.829354 | orchestrator | 2026-02-13 05:13:21.829364 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-13 05:13:21.829374 | orchestrator | Friday 13 February 2026 05:13:18 +0000 (0:00:01.709) 0:02:43.642 ******* 2026-02-13 05:13:21.829390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:13:21.829406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 05:13:21.829419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 05:13:21.829531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 05:13:21.829545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 05:13:21.829576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:13:21.829587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 05:13:21.829597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:13:21.829614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option 
httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:13:21.829633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 05:13:21.829654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 05:13:23.622528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 05:13:23.622631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 05:13:23.622646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 05:13:23.622702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 05:13:23.622716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 05:13:23.622728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 05:13:23.622758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:13:23.622771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:13:23.622782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 05:13:23.622791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 05:13:23.622811 | orchestrator | 2026-02-13 05:13:23.622826 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-13 05:13:23.622839 | orchestrator | Friday 13 February 2026 05:13:22 +0000 (0:00:04.705) 0:02:48.348 ******* 2026-02-13 05:13:23.622858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:13:23.622874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 05:13:23.622895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 05:13:24.847086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 05:13:24.847200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 05:13:24.847255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:13:24.847270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 05:13:24.847283 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:24.847298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:13:24.847333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 05:13:24.847348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 05:13:24.847367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 05:13:24.847384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 05:13:24.847396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:13:24.847408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:13:24.847429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 05:13:40.056327 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:13:40.056461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-13 05:13:40.056496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-13 05:13:40.056512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-13 05:13:40.056521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-13 05:13:40.056530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:13:40.056538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-13 05:13:40.056546 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:40.056555 | orchestrator | 2026-02-13 05:13:40.056564 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-13 05:13:40.056573 | orchestrator | Friday 13 February 2026 05:13:24 +0000 (0:00:01.839) 0:02:50.188 ******* 2026-02-13 05:13:40.056606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:40.056617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:40.056627 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:40.056636 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:40.056644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:40.056652 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:13:40.056660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:40.056668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:13:40.056676 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:40.056684 | orchestrator | 2026-02-13 05:13:40.056695 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-13 05:13:40.056704 | orchestrator | Friday 13 February 2026 05:13:26 +0000 (0:00:01.929) 0:02:52.117 ******* 2026-02-13 05:13:40.056712 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:13:40.056720 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:13:40.056728 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:13:40.056736 | orchestrator | 2026-02-13 05:13:40.056744 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-13 05:13:40.056752 | orchestrator | Friday 13 February 2026 05:13:28 +0000 (0:00:02.217) 0:02:54.334 ******* 2026-02-13 05:13:40.056759 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:13:40.056767 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:13:40.056775 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:13:40.056783 | orchestrator | 2026-02-13 05:13:40.056790 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-13 05:13:40.056798 | orchestrator | Friday 13 February 2026 05:13:32 +0000 (0:00:03.594) 0:02:57.929 ******* 2026-02-13 05:13:40.056806 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:40.056814 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:13:40.056822 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:13:40.056830 | orchestrator | 2026-02-13 05:13:40.056837 | orchestrator | TASK [include_role : glance] 
*************************************************** 2026-02-13 05:13:40.056845 | orchestrator | Friday 13 February 2026 05:13:33 +0000 (0:00:01.354) 0:02:59.284 ******* 2026-02-13 05:13:40.056853 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:13:40.056861 | orchestrator | 2026-02-13 05:13:40.056869 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-13 05:13:40.056877 | orchestrator | Friday 13 February 2026 05:13:35 +0000 (0:00:01.751) 0:03:01.036 ******* 2026-02-13 05:13:40.056896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 05:13:41.162294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 05:13:41.162463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 
2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 05:13:41.162562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 05:13:41.162586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-13 05:13:41.162632 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 05:13:44.150992 | 
orchestrator | 2026-02-13 05:13:44.151127 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-13 05:13:44.151153 | orchestrator | Friday 13 February 2026 05:13:41 +0000 (0:00:05.472) 0:03:06.508 ******* 2026-02-13 05:13:44.151177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 05:13:44.151231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 05:13:44.151253 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:13:44.151311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 05:13:44.151334 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 05:13:44.151346 | orchestrator | skipping: [testbed-node-1] 
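The `custom_member_list` entries repeated throughout these loop items all follow one fixed pattern: a `server` directive per backend node, a shared health-check clause (`check inter 2000 rise 2 fall 5`), and, for the TLS-proxy variants, an extra `ssl verify required ca-file ca-certificates.crt` suffix. A minimal sketch of how such lines could be templated — `build_member_lines` is a hypothetical helper for illustration, not a kolla-ansible API:

```python
# Sketch: rendering HAProxy backend "server" lines shaped like the
# custom_member_list entries in the log above.
# build_member_lines is a hypothetical helper, not part of kolla-ansible.

def build_member_lines(hosts, port,
                       check="check inter 2000 rise 2 fall 5",
                       tls_opts=""):
    """Render one HAProxy 'server' directive per backend host.

    hosts: mapping of server name -> IP address
    port: backend service port (e.g. 9292 for glance-api)
    tls_opts: extra options appended for TLS backends, e.g.
              'ssl verify required ca-file ca-certificates.crt'
    """
    lines = []
    for name, ip in hosts.items():
        parts = [f"server {name} {ip}:{port}", check]
        if tls_opts:
            parts.append(tls_opts)
        lines.append(" ".join(parts))
    return lines


# Node addresses as they appear in this deployment's log output.
hosts = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

plain = build_member_lines(hosts, 9292)
tls = build_member_lines(
    hosts, 9292,
    tls_opts="ssl verify required ca-file ca-certificates.crt")
```

With these inputs, `plain` reproduces the three member lines of the `glance_api` backend, while `tls` matches the `glance_tls_proxy` variant, which appends the SSL verification options because that service sets `tls_backend: 'yes'`.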
2026-02-13 05:13:44.151370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-13 05:14:03.289690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-13 05:14:03.289800 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:03.289816 | orchestrator | 2026-02-13 05:14:03.289828 | orchestrator | TASK [haproxy-config : Configuring firewall 
for glance] ************************ 2026-02-13 05:14:03.289839 | orchestrator | Friday 13 February 2026 05:13:45 +0000 (0:00:04.089) 0:03:10.597 ******* 2026-02-13 05:14:03.289851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 05:14:03.289879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 05:14:03.289892 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:03.289903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 05:14:03.289951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 05:14:03.289963 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:03.290001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 05:14:03.290012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-13 05:14:03.290075 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:03.290086 | orchestrator | 2026-02-13 05:14:03.290096 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-13 05:14:03.290106 | orchestrator 
| Friday 13 February 2026 05:13:49 +0000 (0:00:04.281) 0:03:14.879 ******* 2026-02-13 05:14:03.290116 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:14:03.290127 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:14:03.290136 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:14:03.290146 | orchestrator | 2026-02-13 05:14:03.290156 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-13 05:14:03.290165 | orchestrator | Friday 13 February 2026 05:13:51 +0000 (0:00:02.261) 0:03:17.140 ******* 2026-02-13 05:14:03.290175 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:14:03.290185 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:14:03.290194 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:14:03.290204 | orchestrator | 2026-02-13 05:14:03.290214 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-13 05:14:03.290225 | orchestrator | Friday 13 February 2026 05:13:55 +0000 (0:00:03.747) 0:03:20.887 ******* 2026-02-13 05:14:03.290237 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:03.290248 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:03.290259 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:03.290270 | orchestrator | 2026-02-13 05:14:03.290281 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-13 05:14:03.290292 | orchestrator | Friday 13 February 2026 05:13:57 +0000 (0:00:01.547) 0:03:22.434 ******* 2026-02-13 05:14:03.290304 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:14:03.290315 | orchestrator | 2026-02-13 05:14:03.290327 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-13 05:14:03.290364 | orchestrator | Friday 13 February 2026 05:13:58 +0000 (0:00:01.581) 0:03:24.016 ******* 2026-02-13 05:14:03.290391 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:14:03.290412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:14:19.436291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:14:19.436431 | orchestrator | 2026-02-13 05:14:19.436442 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-13 05:14:19.436452 | orchestrator | Friday 13 February 2026 05:14:03 +0000 (0:00:04.610) 0:03:28.626 ******* 2026-02-13 05:14:19.436462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:14:19.436470 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:19.436487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:14:19.436513 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:19.436534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:14:19.436542 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:19.436550 | orchestrator | 2026-02-13 05:14:19.436557 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-13 05:14:19.436565 | orchestrator | Friday 13 February 2026 05:14:04 +0000 (0:00:01.679) 0:03:30.305 ******* 2026-02-13 05:14:19.436574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:14:19.436584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:14:19.436593 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:19.436614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:14:19.436622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:14:19.436629 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:19.436637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:14:19.436644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:14:19.436652 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:19.436659 | orchestrator | 2026-02-13 05:14:19.436666 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-13 05:14:19.436673 | orchestrator | Friday 13 February 2026 05:14:06 +0000 (0:00:01.466) 0:03:31.772 ******* 2026-02-13 05:14:19.436680 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:14:19.436687 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:14:19.436693 | orchestrator | ok: [testbed-node-2] 2026-02-13 
05:14:19.436700 | orchestrator | 2026-02-13 05:14:19.436707 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-13 05:14:19.436714 | orchestrator | Friday 13 February 2026 05:14:08 +0000 (0:00:02.215) 0:03:33.987 ******* 2026-02-13 05:14:19.436720 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:14:19.436726 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:14:19.436738 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:14:19.436745 | orchestrator | 2026-02-13 05:14:19.436752 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-13 05:14:19.436759 | orchestrator | Friday 13 February 2026 05:14:11 +0000 (0:00:02.858) 0:03:36.846 ******* 2026-02-13 05:14:19.436765 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:19.436772 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:19.436778 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:19.436785 | orchestrator | 2026-02-13 05:14:19.436792 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-13 05:14:19.436797 | orchestrator | Friday 13 February 2026 05:14:12 +0000 (0:00:01.400) 0:03:38.247 ******* 2026-02-13 05:14:19.436803 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:14:19.436809 | orchestrator | 2026-02-13 05:14:19.436815 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-13 05:14:19.436822 | orchestrator | Friday 13 February 2026 05:14:14 +0000 (0:00:01.906) 0:03:40.153 ******* 2026-02-13 05:14:19.436846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 
'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 05:14:21.147060 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 05:14:21.147221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-13 05:14:21.147245 | orchestrator | 2026-02-13 05:14:21.147268 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-13 05:14:21.147437 | orchestrator | Friday 13 February 2026 05:14:19 +0000 (0:00:04.622) 0:03:44.775 ******* 2026-02-13 05:14:21.147499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 05:14:21.147524 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:21.147555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 05:14:29.840412 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:29.840551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-13 05:14:29.840583 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:29.840605 | orchestrator | 2026-02-13 05:14:29.840626 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] 
*********************** 2026-02-13 05:14:29.840669 | orchestrator | Friday 13 February 2026 05:14:21 +0000 (0:00:01.711) 0:03:46.487 ******* 2026-02-13 05:14:29.840690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-13 05:14:29.840714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 05:14:29.840736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-13 05:14:29.840825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 05:14:29.840851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-13 05:14:29.840873 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:29.840917 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-13 05:14:29.840939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 05:14:29.840960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-13 05:14:29.840980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 05:14:29.841007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-13 05:14:29.841028 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:29.841047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-13 05:14:29.841068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 05:14:29.841087 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-13 05:14:29.841107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-13 05:14:29.841124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-13 05:14:29.841155 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:29.841173 | orchestrator | 2026-02-13 05:14:29.841192 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-13 05:14:29.841212 | orchestrator | Friday 13 February 2026 05:14:23 +0000 (0:00:01.936) 0:03:48.423 ******* 2026-02-13 05:14:29.841230 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:14:29.841250 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:14:29.841269 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:14:29.841319 | 
orchestrator | 2026-02-13 05:14:29.841337 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-13 05:14:29.841356 | orchestrator | Friday 13 February 2026 05:14:25 +0000 (0:00:02.340) 0:03:50.764 ******* 2026-02-13 05:14:29.841374 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:14:29.841394 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:14:29.841413 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:14:29.841432 | orchestrator | 2026-02-13 05:14:29.841450 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-13 05:14:29.841469 | orchestrator | Friday 13 February 2026 05:14:28 +0000 (0:00:02.826) 0:03:53.591 ******* 2026-02-13 05:14:29.841489 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:29.841508 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:29.841525 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:29.841543 | orchestrator | 2026-02-13 05:14:29.841563 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-13 05:14:29.841581 | orchestrator | Friday 13 February 2026 05:14:29 +0000 (0:00:01.354) 0:03:54.945 ******* 2026-02-13 05:14:29.841616 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:39.666241 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:39.666367 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:39.666383 | orchestrator | 2026-02-13 05:14:39.666395 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-13 05:14:39.666406 | orchestrator | Friday 13 February 2026 05:14:30 +0000 (0:00:01.325) 0:03:56.271 ******* 2026-02-13 05:14:39.666415 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:14:39.666425 | orchestrator | 2026-02-13 05:14:39.666435 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] 
******************* 2026-02-13 05:14:39.666444 | orchestrator | Friday 13 February 2026 05:14:32 +0000 (0:00:01.980) 0:03:58.251 ******* 2026-02-13 05:14:39.666476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-13 05:14:39.666494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 05:14:39.666526 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 05:14:39.666537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-13 05:14:39.666566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 
'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 05:14:39.666577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 05:14:39.666593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-13 05:14:39.666612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 05:14:39.666622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 05:14:39.666633 | orchestrator | 2026-02-13 05:14:39.666643 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-13 05:14:39.666653 | orchestrator | Friday 13 February 2026 05:14:37 +0000 (0:00:04.867) 0:04:03.118 ******* 2026-02-13 05:14:39.666671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-13 05:14:41.316342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 05:14:41.316467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 05:14:41.316508 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:41.316525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-13 05:14:41.316538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 05:14:41.316550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 05:14:41.316562 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:41.316598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-13 05:14:41.316612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-13 05:14:41.316636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-13 05:14:41.316647 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:41.316659 | orchestrator | 2026-02-13 05:14:41.316671 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-13 05:14:41.316683 | orchestrator | Friday 13 February 2026 05:14:39 +0000 (0:00:01.886) 0:04:05.005 ******* 2026-02-13 05:14:41.316696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-13 05:14:41.316710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-13 05:14:41.316723 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:41.316734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-13 05:14:41.316746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-13 05:14:41.316757 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:41.316768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-13 05:14:41.316780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-13 05:14:41.316791 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:41.316802 | orchestrator | 2026-02-13 05:14:41.316813 | 
orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-13 05:14:41.316832 | orchestrator | Friday 13 February 2026 05:14:41 +0000 (0:00:01.649) 0:04:06.654 ******* 2026-02-13 05:14:55.999178 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:14:55.999288 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:14:55.999295 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:14:55.999313 | orchestrator | 2026-02-13 05:14:55.999319 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-13 05:14:55.999324 | orchestrator | Friday 13 February 2026 05:14:43 +0000 (0:00:02.221) 0:04:08.876 ******* 2026-02-13 05:14:55.999328 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:14:55.999332 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:14:55.999335 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:14:55.999339 | orchestrator | 2026-02-13 05:14:55.999343 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-13 05:14:55.999347 | orchestrator | Friday 13 February 2026 05:14:46 +0000 (0:00:02.852) 0:04:11.728 ******* 2026-02-13 05:14:55.999351 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:14:55.999356 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:14:55.999371 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:14:55.999375 | orchestrator | 2026-02-13 05:14:55.999379 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-13 05:14:55.999383 | orchestrator | Friday 13 February 2026 05:14:47 +0000 (0:00:01.337) 0:04:13.065 ******* 2026-02-13 05:14:55.999387 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:14:55.999391 | orchestrator | 2026-02-13 05:14:55.999395 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-13 05:14:55.999398 | orchestrator | 
Friday 13 February 2026 05:14:49 +0000 (0:00:01.732) 0:04:14.798 ******* 2026-02-13 05:14:55.999405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:14:55.999413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:14:55.999418 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:14:55.999440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:14:55.999448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:14:55.999452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:14:55.999456 | orchestrator | 2026-02-13 05:14:55.999460 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-13 05:14:55.999465 | orchestrator | Friday 13 February 2026 05:14:54 +0000 (0:00:04.809) 0:04:19.608 ******* 2026-02-13 
05:14:55.999469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:14:55.999479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:15:08.353991 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:15:08.354180 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:15:08.354335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:15:08.354349 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:15:08.354362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:15:08.354397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:15:08.354410 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:15:08.354421 | orchestrator | 2026-02-13 05:15:08.354434 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-13 
05:15:08.354446 | orchestrator | Friday 13 February 2026 05:14:55 +0000 (0:00:01.730) 0:04:21.338 ******* 2026-02-13 05:15:08.354476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:08.354491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:08.354504 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:15:08.354523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:08.354538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:08.354551 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:15:08.354564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:08.354577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:08.354590 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:15:08.354603 | 
orchestrator | 2026-02-13 05:15:08.354617 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-13 05:15:08.354636 | orchestrator | Friday 13 February 2026 05:14:57 +0000 (0:00:01.868) 0:04:23.207 ******* 2026-02-13 05:15:08.354655 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:15:08.354673 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:15:08.354691 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:15:08.354710 | orchestrator | 2026-02-13 05:15:08.354728 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-13 05:15:08.354747 | orchestrator | Friday 13 February 2026 05:15:00 +0000 (0:00:02.259) 0:04:25.467 ******* 2026-02-13 05:15:08.354764 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:15:08.354782 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:15:08.354800 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:15:08.354818 | orchestrator | 2026-02-13 05:15:08.354838 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-13 05:15:08.354857 | orchestrator | Friday 13 February 2026 05:15:02 +0000 (0:00:02.752) 0:04:28.219 ******* 2026-02-13 05:15:08.354875 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:15:08.354906 | orchestrator | 2026-02-13 05:15:08.354917 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-13 05:15:08.354928 | orchestrator | Friday 13 February 2026 05:15:04 +0000 (0:00:02.033) 0:04:30.252 ******* 2026-02-13 05:15:08.354942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:15:08.354956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:15:08.354987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': 
'30'}}})  2026-02-13 05:15:10.007088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-13 05:15:10.007269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:15:10.007326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:15:10.007344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-13 05:15:10.007362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-13 05:15:10.007425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 
'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:15:10.007446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:15:10.007464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-13 05:15:10.007491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-13 05:15:10.007503 | orchestrator | 2026-02-13 05:15:10.007515 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-13 05:15:10.007526 | orchestrator | Friday 13 February 2026 05:15:09 +0000 (0:00:04.517) 0:04:34.770 ******* 2026-02-13 05:15:10.007538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:15:10.007556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:15:13.066317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-13 05:15:13.066392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-13 05:15:13.066416 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:15:13.066424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:15:13.066867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:15:13.066890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-13 05:15:13.066910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-13 05:15:13.066916 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:15:13.066922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': 
'30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:15:13.066935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:15:13.066943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-13 05:15:13.066948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-13 05:15:13.066952 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:15:13.066957 | orchestrator | 2026-02-13 05:15:13.066962 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-13 05:15:13.066968 | orchestrator | Friday 13 February 2026 05:15:11 +0000 (0:00:01.682) 0:04:36.452 ******* 2026-02-13 05:15:13.066974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:13.066981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:13.066988 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:15:13.066993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:13.067001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option 
httpchk']}})  2026-02-13 05:15:28.097533 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:15:28.097633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:28.097668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:15:28.097680 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:15:28.097689 | orchestrator | 2026-02-13 05:15:28.097698 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-13 05:15:28.097707 | orchestrator | Friday 13 February 2026 05:15:13 +0000 (0:00:01.943) 0:04:38.395 ******* 2026-02-13 05:15:28.097716 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:15:28.097727 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:15:28.097741 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:15:28.097754 | orchestrator | 2026-02-13 05:15:28.097768 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-13 05:15:28.097781 | orchestrator | Friday 13 February 2026 05:15:15 +0000 (0:00:02.293) 0:04:40.689 ******* 2026-02-13 05:15:28.097794 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:15:28.097807 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:15:28.097822 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:15:28.097835 | orchestrator | 2026-02-13 05:15:28.097848 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-13 05:15:28.097859 | orchestrator | Friday 13 February 2026 05:15:18 +0000 (0:00:02.830) 0:04:43.520 ******* 2026-02-13 05:15:28.097867 | orchestrator | 
included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:15:28.097875 | orchestrator | 2026-02-13 05:15:28.097884 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-13 05:15:28.097892 | orchestrator | Friday 13 February 2026 05:15:20 +0000 (0:00:02.432) 0:04:45.953 ******* 2026-02-13 05:15:28.097900 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:15:28.097908 | orchestrator | 2026-02-13 05:15:28.097916 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-13 05:15:28.097924 | orchestrator | Friday 13 February 2026 05:15:24 +0000 (0:00:04.021) 0:04:49.974 ******* 2026-02-13 05:15:28.097950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:15:28.097985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-13 05:15:28.097995 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:15:28.098005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:15:28.098067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-13 05:15:28.098080 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:15:28.098103 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:15:31.553349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 
'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-13 05:15:31.553486 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:15:31.553514 | orchestrator | 2026-02-13 05:15:31.553534 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-13 05:15:31.553552 | orchestrator | Friday 13 February 2026 05:15:28 +0000 (0:00:03.459) 0:04:53.434 ******* 2026-02-13 05:15:31.553595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:15:31.553648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-13 05:15:31.553667 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:15:31.553713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:15:31.553739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': 
'192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-13 05:15:31.553757 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:15:31.553775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  
2026-02-13 05:15:31.553815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-13 05:15:46.629784 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:15:46.629889 | orchestrator | 2026-02-13 05:15:46.629903 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-13 05:15:46.629914 | orchestrator | Friday 13 February 2026 05:15:31 +0000 (0:00:03.459) 0:04:56.893 ******* 2026-02-13 05:15:46.629927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 05:15:46.629942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 05:15:46.629953 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:15:46.629991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 05:15:46.630003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 05:15:46.630085 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:15:46.630127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 05:15:46.630145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-13 05:15:46.630156 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:15:46.630166 | orchestrator | 2026-02-13 05:15:46.630176 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-13 05:15:46.630185 | orchestrator | Friday 13 February 2026 05:15:34 +0000 (0:00:03.249) 0:05:00.143 ******* 2026-02-13 05:15:46.630195 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:15:46.630223 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:15:46.630233 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:15:46.630243 | orchestrator | 2026-02-13 05:15:46.630253 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-13 05:15:46.630262 | orchestrator | Friday 13 February 2026 05:15:37 +0000 (0:00:02.874) 0:05:03.017 ******* 2026-02-13 05:15:46.630272 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:15:46.630281 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:15:46.630291 | orchestrator | skipping: [testbed-node-2] 2026-02-13 
05:15:46.630301 | orchestrator | 2026-02-13 05:15:46.630311 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-13 05:15:46.630320 | orchestrator | Friday 13 February 2026 05:15:40 +0000 (0:00:02.577) 0:05:05.594 ******* 2026-02-13 05:15:46.630330 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:15:46.630339 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:15:46.630349 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:15:46.630358 | orchestrator | 2026-02-13 05:15:46.630368 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-13 05:15:46.630377 | orchestrator | Friday 13 February 2026 05:15:41 +0000 (0:00:01.346) 0:05:06.941 ******* 2026-02-13 05:15:46.630387 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:15:46.630396 | orchestrator | 2026-02-13 05:15:46.630406 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-13 05:15:46.630415 | orchestrator | Friday 13 February 2026 05:15:43 +0000 (0:00:02.095) 0:05:09.037 ******* 2026-02-13 05:15:46.630432 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-13 
05:15:46.630451 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-13 05:15:46.630462 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-13 05:15:46.630472 | orchestrator | 2026-02-13 05:15:46.630482 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-13 05:15:46.630492 | orchestrator | Friday 13 February 2026 05:15:46 +0000 (0:00:02.484) 0:05:11.522 ******* 2026-02-13 05:15:46.630508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-13 05:16:00.687843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-13 05:16:00.687947 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:16:00.687982 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:00.688007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-13 05:16:00.688017 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:16:00.688026 | orchestrator | 2026-02-13 05:16:00.688035 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-13 05:16:00.688045 | orchestrator | Friday 13 February 2026 05:15:47 +0000 (0:00:01.602) 0:05:13.124 ******* 2026-02-13 05:16:00.688056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-13 05:16:00.688092 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:16:00.688101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-13 05:16:00.688110 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:00.688119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-13 05:16:00.688129 | orchestrator | skipping: 
[testbed-node-2] 2026-02-13 05:16:00.688138 | orchestrator | 2026-02-13 05:16:00.688147 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-13 05:16:00.688156 | orchestrator | Friday 13 February 2026 05:15:49 +0000 (0:00:01.404) 0:05:14.529 ******* 2026-02-13 05:16:00.688164 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:16:00.688173 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:00.688181 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:16:00.688190 | orchestrator | 2026-02-13 05:16:00.688199 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-13 05:16:00.688207 | orchestrator | Friday 13 February 2026 05:15:50 +0000 (0:00:01.436) 0:05:15.965 ******* 2026-02-13 05:16:00.688216 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:16:00.688225 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:00.688233 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:16:00.688242 | orchestrator | 2026-02-13 05:16:00.688250 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-13 05:16:00.688259 | orchestrator | Friday 13 February 2026 05:15:52 +0000 (0:00:02.128) 0:05:18.094 ******* 2026-02-13 05:16:00.688268 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:16:00.688276 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:00.688285 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:16:00.688293 | orchestrator | 2026-02-13 05:16:00.688302 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-13 05:16:00.688311 | orchestrator | Friday 13 February 2026 05:15:54 +0000 (0:00:01.692) 0:05:19.787 ******* 2026-02-13 05:16:00.688319 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:16:00.688328 | orchestrator | 2026-02-13 05:16:00.688344 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-13 05:16:00.688353 | orchestrator | Friday 13 February 2026 05:15:56 +0000 (0:00:01.994) 0:05:21.781 ******* 2026-02-13 05:16:00.688379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:16:00.688399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:00.688412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-13 05:16:00.688425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-13 05:16:00.688451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:00.814683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-13 05:16:00.814796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}}})  2026-02-13 05:16:00.814832 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 05:16:00.814847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 05:16:00.814860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:00.814893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-13 05:16:00.814927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-13 05:16:00.814943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:16:00.814952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:00.814959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': 
{'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-13 05:16:00.814966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:00.814983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-13 05:16:00.928145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 
'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-13 05:16:00.928250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  
2026-02-13 05:16:00.928268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:00.928303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-13 05:16:00.928317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-13 05:16:00.928348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 05:16:00.928377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 05:16:00.928423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:16:00.928436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:00.928457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-13 05:16:00.928477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:02.210412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-13 05:16:02.210592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-02-13 05:16:02.210620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:02.210652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-02-13 05:16:02.210682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-13 05:16:02.210701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:02.210713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-13 05:16:02.210724 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-13 05:16:02.210742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-13 05:16:02.210753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-13 05:16:02.210764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 05:16:02.210787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:03.240044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-02-13 05:16:03.240205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-13 05:16:03.240223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:03.240265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-13 05:16:03.240282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-13 05:16:03.240294 | orchestrator |
2026-02-13 05:16:03.240308 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-02-13 05:16:03.240320 | orchestrator | Friday 13 February 2026 05:16:02 +0000 (0:00:05.769) 0:05:27.550 *******
2026-02-13 05:16:03.240372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:16:03.240396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:03.240427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-02-13 05:16:03.240447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-02-13 05:16:03.240487 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:03.327009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-13 05:16:03.327140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-13 05:16:03.327183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-13 05:16:03.327198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 05:16:03.327213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:16:03.327261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:03.327277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-02-13 05:16:03.327297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:03.327309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-13 05:16:03.327321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-02-13 05:16:03.327333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:03.327359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-02-13 05:16:03.408128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-02-13 05:16:03.408225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:03.408242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-02-13 05:16:03.408255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-13 05:16:03.408268 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:16:03.408304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-13 05:16:03.408349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:16:03.408395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-13 05:16:03.408416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:03.408435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-13 05:16:03.408461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})
2026-02-13 05:16:03.408503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:04.611640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})
2026-02-13 05:16:04.611742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})
2026-02-13 05:16:04.611755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-02-13 05:16:04.611766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})
2026-02-13 05:16:04.611788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent',
'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-13 05:16:04.611821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:04.611911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-13 05:16:04.611930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-13 05:16:04.611943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-13 05:16:04.611955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-13 05:16:04.611973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-13 05:16:04.611995 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:04.612029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:18.613522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-13 05:16:18.613617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-13 05:16:18.613629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-13 05:16:18.613640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-13 05:16:18.613683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-13 05:16:18.613691 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:16:18.613699 | orchestrator | 2026-02-13 05:16:18.613707 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-13 05:16:18.613714 | orchestrator | Friday 13 February 2026 05:16:04 +0000 (0:00:02.397) 0:05:29.948 ******* 2026-02-13 05:16:18.613722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:18.613744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:18.613752 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:16:18.613758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:18.613765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:18.613772 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:18.613778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:18.613785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:18.613791 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:16:18.613797 | orchestrator | 2026-02-13 05:16:18.613803 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-13 05:16:18.613809 | orchestrator | Friday 13 February 2026 05:16:07 +0000 
(0:00:02.770) 0:05:32.718 ******* 2026-02-13 05:16:18.613831 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:16:18.613839 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:16:18.613845 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:16:18.613858 | orchestrator | 2026-02-13 05:16:18.613864 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-13 05:16:18.613871 | orchestrator | Friday 13 February 2026 05:16:09 +0000 (0:00:02.277) 0:05:34.996 ******* 2026-02-13 05:16:18.613876 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:16:18.613882 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:16:18.613887 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:16:18.613892 | orchestrator | 2026-02-13 05:16:18.613898 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-13 05:16:18.613903 | orchestrator | Friday 13 February 2026 05:16:12 +0000 (0:00:02.658) 0:05:37.655 ******* 2026-02-13 05:16:18.613917 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:16:18.613923 | orchestrator | 2026-02-13 05:16:18.613929 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-13 05:16:18.613935 | orchestrator | Friday 13 February 2026 05:16:14 +0000 (0:00:02.111) 0:05:39.766 ******* 2026-02-13 05:16:18.613946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-13 05:16:18.613962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-13 05:16:35.025188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-13 05:16:35.025295 | orchestrator | 2026-02-13 05:16:35.025309 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-13 05:16:35.025318 | orchestrator | Friday 13 February 2026 05:16:18 +0000 (0:00:04.183) 0:05:43.950 ******* 2026-02-13 05:16:35.025327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-13 05:16:35.025354 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:16:35.025377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-13 05:16:35.025387 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:35.025412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 
'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-13 05:16:35.025421 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:16:35.025428 | orchestrator | 2026-02-13 05:16:35.025436 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-13 05:16:35.025444 | orchestrator | Friday 13 February 2026 05:16:20 +0000 (0:00:01.517) 0:05:45.468 ******* 2026-02-13 05:16:35.025454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:16:35.025465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:16:35.025481 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:16:35.025490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:16:35.025499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:16:35.025507 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:35.025516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:16:35.025525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:16:35.025533 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:16:35.025542 | orchestrator | 2026-02-13 05:16:35.025551 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-13 05:16:35.025559 | orchestrator | Friday 13 February 2026 05:16:21 +0000 (0:00:01.489) 0:05:46.958 ******* 2026-02-13 05:16:35.025567 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:16:35.025577 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:16:35.025585 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:16:35.025594 | orchestrator | 2026-02-13 05:16:35.025602 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-13 05:16:35.025611 | orchestrator | Friday 13 February 2026 05:16:23 +0000 (0:00:02.176) 0:05:49.134 ******* 2026-02-13 05:16:35.025619 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:16:35.025632 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:16:35.025641 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:16:35.025649 | orchestrator | 2026-02-13 05:16:35.025658 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-13 
05:16:35.025666 | orchestrator | Friday 13 February 2026 05:16:26 +0000 (0:00:03.106) 0:05:52.240 ******* 2026-02-13 05:16:35.025675 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:16:35.025683 | orchestrator | 2026-02-13 05:16:35.025691 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-13 05:16:35.025699 | orchestrator | Friday 13 February 2026 05:16:29 +0000 (0:00:02.453) 0:05:54.694 ******* 2026-02-13 05:16:35.025714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:16:36.123255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:16:36.123364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:16:36.123398 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:16:36.123459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:16:36.123494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:16:36.123529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:16:36.123543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:16:36.123587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:16:36.123600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:16:36.123622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 
'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:16:36.760544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:16:36.760656 | orchestrator | 2026-02-13 05:16:36.760675 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-13 05:16:36.760687 | orchestrator | Friday 13 February 2026 05:16:36 +0000 (0:00:06.773) 0:06:01.468 ******* 2026-02-13 05:16:36.760704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:16:36.760737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:16:36.760751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:16:36.760804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:16:36.760817 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:16:36.760831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:16:36.760849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:16:36.760862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  
2026-02-13 05:16:36.760873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:16:36.760893 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:36.760913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:16:54.635203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': 
{'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:16:54.635366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-13 05:16:54.635431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-13 05:16:54.635455 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:16:54.635505 | orchestrator | 2026-02-13 05:16:54.635526 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-13 05:16:54.635546 | orchestrator | Friday 13 February 2026 05:16:37 +0000 (0:00:01.715) 0:06:03.183 ******* 2026-02-13 05:16:54.635566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635634 | orchestrator | skipping: [testbed-node-0] 2026-02-13 
05:16:54.635646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635710 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:16:54.635723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:16:54.635780 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:16:54.635793 | orchestrator | 2026-02-13 05:16:54.635806 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-13 05:16:54.635828 | orchestrator | Friday 13 February 2026 05:16:39 +0000 (0:00:02.025) 0:06:05.209 ******* 2026-02-13 05:16:54.635842 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:16:54.635855 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:16:54.635867 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:16:54.635879 | orchestrator | 2026-02-13 05:16:54.635892 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-13 05:16:54.635905 | orchestrator | Friday 13 February 2026 05:16:42 +0000 (0:00:02.221) 0:06:07.430 ******* 2026-02-13 05:16:54.635918 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:16:54.635930 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:16:54.635942 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:16:54.635981 | orchestrator | 2026-02-13 05:16:54.635994 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-13 05:16:54.636007 | orchestrator | Friday 13 February 2026 05:16:44 +0000 (0:00:02.911) 0:06:10.342 ******* 2026-02-13 05:16:54.636020 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:16:54.636032 | orchestrator | 2026-02-13 05:16:54.636044 | orchestrator | TASK [nova-cell : Configure loadbalancer for 
nova-novncproxy] ****************** 2026-02-13 05:16:54.636058 | orchestrator | Friday 13 February 2026 05:16:47 +0000 (0:00:02.730) 0:06:13.072 ******* 2026-02-13 05:16:54.636070 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-13 05:16:54.636084 | orchestrator | 2026-02-13 05:16:54.636097 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-13 05:16:54.636110 | orchestrator | Friday 13 February 2026 05:16:49 +0000 (0:00:01.643) 0:06:14.716 ******* 2026-02-13 05:16:54.636124 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-13 05:16:54.636140 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-13 05:16:54.636167 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-13 05:17:13.449790 | orchestrator | 2026-02-13 05:17:13.449915 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-13 05:17:13.449959 | orchestrator | Friday 13 February 2026 05:16:54 +0000 (0:00:05.248) 0:06:19.964 ******* 2026-02-13 05:17:13.449974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 05:17:13.450011 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:17:13.450096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 05:17:13.450108 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:17:13.450133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 05:17:13.450146 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:17:13.450157 | orchestrator | 2026-02-13 05:17:13.450168 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-13 05:17:13.450179 | orchestrator | Friday 13 February 2026 05:16:56 +0000 (0:00:02.372) 0:06:22.337 ******* 2026-02-13 05:17:13.450192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 05:17:13.450206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 05:17:13.450219 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:17:13.450229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 05:17:13.450241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 05:17:13.450252 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:17:13.450263 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 05:17:13.450274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-13 05:17:13.450285 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:17:13.450296 | orchestrator | 2026-02-13 05:17:13.450310 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-13 05:17:13.450322 | orchestrator | Friday 13 February 2026 05:16:59 +0000 (0:00:02.388) 0:06:24.725 ******* 2026-02-13 05:17:13.450335 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:17:13.450348 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:17:13.450360 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:17:13.450372 | orchestrator | 2026-02-13 05:17:13.450385 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-13 05:17:13.450397 | orchestrator | Friday 13 February 2026 05:17:03 +0000 (0:00:03.771) 0:06:28.496 ******* 2026-02-13 05:17:13.450420 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:17:13.450432 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:17:13.450465 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:17:13.450478 | orchestrator | 2026-02-13 05:17:13.450492 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-13 05:17:13.450505 | orchestrator | Friday 13 February 2026 05:17:06 +0000 (0:00:03.805) 0:06:32.301 ******* 2026-02-13 05:17:13.450518 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-13 05:17:13.450532 | orchestrator | 2026-02-13 05:17:13.450545 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-02-13 05:17:13.450557 | orchestrator | Friday 13 February 2026 05:17:08 +0000 (0:00:01.637) 0:06:33.939 ******* 2026-02-13 05:17:13.450572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 05:17:13.450585 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:17:13.450603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 05:17:13.450616 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:17:13.450629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 05:17:13.450642 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:17:13.450655 | orchestrator | 2026-02-13 05:17:13.450668 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-13 05:17:13.450681 | orchestrator | Friday 13 February 2026 05:17:10 +0000 (0:00:02.365) 0:06:36.305 ******* 2026-02-13 05:17:13.450692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 05:17:13.450703 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:17:13.450715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 05:17:13.450733 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:17:13.450750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-13 05:17:46.831269 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:17:46.831394 | orchestrator | 2026-02-13 05:17:46.831413 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-13 05:17:46.831427 | orchestrator | Friday 13 February 2026 05:17:13 +0000 (0:00:02.472) 0:06:38.777 ******* 2026-02-13 05:17:46.831440 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:17:46.831451 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:17:46.831462 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:17:46.831473 | orchestrator | 2026-02-13 05:17:46.831485 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-13 05:17:46.831496 | orchestrator | Friday 13 February 2026 05:17:15 +0000 (0:00:02.430) 0:06:41.208 ******* 2026-02-13 05:17:46.831507 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:17:46.831519 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:17:46.831531 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:17:46.831543 | orchestrator | 2026-02-13 05:17:46.831554 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-13 05:17:46.831565 | orchestrator | Friday 13 February 2026 05:17:19 +0000 (0:00:03.473) 0:06:44.681 ******* 2026-02-13 05:17:46.831576 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:17:46.831587 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:17:46.831598 | orchestrator | ok: [testbed-node-2] 2026-02-13 
05:17:46.831608 | orchestrator | 2026-02-13 05:17:46.831624 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-13 05:17:46.831642 | orchestrator | Friday 13 February 2026 05:17:23 +0000 (0:00:03.850) 0:06:48.532 ******* 2026-02-13 05:17:46.831661 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-13 05:17:46.831681 | orchestrator | 2026-02-13 05:17:46.831700 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-13 05:17:46.831740 | orchestrator | Friday 13 February 2026 05:17:25 +0000 (0:00:02.327) 0:06:50.859 ******* 2026-02-13 05:17:46.831764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 05:17:46.831782 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:17:46.831797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 05:17:46.831831 | 
orchestrator | skipping: [testbed-node-1] 2026-02-13 05:17:46.831846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 05:17:46.831912 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:17:46.831944 | orchestrator | 2026-02-13 05:17:46.831966 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-13 05:17:46.831987 | orchestrator | Friday 13 February 2026 05:17:27 +0000 (0:00:02.372) 0:06:53.231 ******* 2026-02-13 05:17:46.832007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 05:17:46.832026 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:17:46.832073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 05:17:46.832094 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:17:46.832115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-13 05:17:46.832136 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:17:46.832154 | orchestrator | 2026-02-13 05:17:46.832174 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-13 05:17:46.832206 | orchestrator | Friday 13 February 2026 05:17:30 +0000 (0:00:02.519) 0:06:55.751 ******* 2026-02-13 05:17:46.832227 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:17:46.832246 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:17:46.832266 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:17:46.832284 | orchestrator | 2026-02-13 05:17:46.832303 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-13 05:17:46.832323 | orchestrator | Friday 13 February 2026 05:17:32 +0000 (0:00:02.497) 0:06:58.249 ******* 2026-02-13 05:17:46.832352 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:17:46.832364 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:17:46.832375 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:17:46.832386 | orchestrator | 2026-02-13 05:17:46.832397 | orchestrator | 
TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-13 05:17:46.832407 | orchestrator | Friday 13 February 2026 05:17:36 +0000 (0:00:03.427) 0:07:01.677 ******* 2026-02-13 05:17:46.832421 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:17:46.832440 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:17:46.832482 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:17:46.832500 | orchestrator | 2026-02-13 05:17:46.832517 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-13 05:17:46.832535 | orchestrator | Friday 13 February 2026 05:17:40 +0000 (0:00:04.170) 0:07:05.848 ******* 2026-02-13 05:17:46.832552 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:17:46.832569 | orchestrator | 2026-02-13 05:17:46.832585 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-13 05:17:46.832600 | orchestrator | Friday 13 February 2026 05:17:42 +0000 (0:00:02.416) 0:07:08.264 ******* 2026-02-13 05:17:46.832618 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 05:17:46.832638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 05:17:46.832673 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 05:17:48.886323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 05:17:48.886449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 05:17:48.886491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 05:17:48.886505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:17:48.886517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 05:17:48.886528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 05:17:48.886559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:17:48.886577 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-13 05:17:48.886596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 05:17:48.886608 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 05:17:48.886625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 05:17:48.886645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 
05:17:48.886666 | orchestrator | 2026-02-13 05:17:48.886687 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-13 05:17:48.886707 | orchestrator | Friday 13 February 2026 05:17:47 +0000 (0:00:05.059) 0:07:13.323 ******* 2026-02-13 05:17:48.886740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 05:17:50.033572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 05:17:50.033675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 05:17:50.033693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 05:17:50.033706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:17:50.033718 | orchestrator | skipping: [testbed-node-0] 
2026-02-13 05:17:50.033733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 05:17:50.033775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 05:17:50.033812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-13 05:17:50.033826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 05:17:50.033838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:17:50.033923 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:17:50.033943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-13 05:17:50.033956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-13 05:17:50.033985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-02-13 05:18:06.448559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-13 05:18:06.448673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-13 05:18:06.448691 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:06.448706 | orchestrator | 2026-02-13 05:18:06.448718 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-13 05:18:06.448731 | orchestrator | Friday 13 February 2026 05:17:50 +0000 (0:00:02.052) 0:07:15.376 ******* 2026-02-13 05:18:06.448750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-13 05:18:06.448770 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-13 05:18:06.448788 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:06.448805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-13 05:18:06.448882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-13 05:18:06.448903 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:06.448920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-13 05:18:06.448937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-13 05:18:06.448980 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:06.448998 | orchestrator | 2026-02-13 05:18:06.449012 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-13 05:18:06.449028 | orchestrator | Friday 13 February 2026 05:17:52 +0000 (0:00:02.078) 0:07:17.454 ******* 2026-02-13 05:18:06.449044 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:18:06.449062 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:18:06.449079 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:18:06.449094 | orchestrator | 2026-02-13 
05:18:06.449111 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-13 05:18:06.449128 | orchestrator | Friday 13 February 2026 05:17:54 +0000 (0:00:02.289) 0:07:19.744 ******* 2026-02-13 05:18:06.449143 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:18:06.449159 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:18:06.449175 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:18:06.449189 | orchestrator | 2026-02-13 05:18:06.449204 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-13 05:18:06.449221 | orchestrator | Friday 13 February 2026 05:17:57 +0000 (0:00:02.877) 0:07:22.621 ******* 2026-02-13 05:18:06.449238 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:18:06.449256 | orchestrator | 2026-02-13 05:18:06.449272 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-13 05:18:06.449288 | orchestrator | Friday 13 February 2026 05:17:59 +0000 (0:00:02.377) 0:07:24.998 ******* 2026-02-13 05:18:06.449343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:18:06.449361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:18:06.449372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:18:06.449395 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:18:06.449430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:18:10.277989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:18:10.278173 | orchestrator | 2026-02-13 05:18:10.278197 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-13 05:18:10.278279 | orchestrator | Friday 13 February 2026 05:18:06 +0000 (0:00:06.784) 0:07:31.783 ******* 2026-02-13 05:18:10.278302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:18:10.278338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:18:10.278358 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:10.278400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:18:10.278419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option 
httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:18:10.278449 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:10.278468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:18:10.278488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:18:10.278505 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:10.278523 | orchestrator | 2026-02-13 05:18:10.278542 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-13 05:18:10.278559 | orchestrator | Friday 13 February 2026 05:18:08 +0000 (0:00:02.027) 0:07:33.810 ******* 2026-02-13 05:18:10.278579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:18:10.278607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-13 05:18:19.062916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-13 05:18:19.063028 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:19.063045 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:18:19.063083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-13 05:18:19.063097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-13 05:18:19.063108 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:19.063119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:18:19.063131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-13 05:18:19.063141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-13 05:18:19.063152 | orchestrator | skipping: [testbed-node-2] 
2026-02-13 05:18:19.063164 | orchestrator | 2026-02-13 05:18:19.063175 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-13 05:18:19.063229 | orchestrator | Friday 13 February 2026 05:18:10 +0000 (0:00:01.811) 0:07:35.622 ******* 2026-02-13 05:18:19.063242 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:19.063253 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:19.063264 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:19.063275 | orchestrator | 2026-02-13 05:18:19.063286 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-13 05:18:19.063296 | orchestrator | Friday 13 February 2026 05:18:11 +0000 (0:00:01.466) 0:07:37.088 ******* 2026-02-13 05:18:19.063307 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:19.063318 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:19.063328 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:19.063339 | orchestrator | 2026-02-13 05:18:19.063350 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-13 05:18:19.063362 | orchestrator | Friday 13 February 2026 05:18:13 +0000 (0:00:02.216) 0:07:39.305 ******* 2026-02-13 05:18:19.063373 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:18:19.063385 | orchestrator | 2026-02-13 05:18:19.063395 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-13 05:18:19.063411 | orchestrator | Friday 13 February 2026 05:18:16 +0000 (0:00:02.551) 0:07:41.857 ******* 2026-02-13 05:18:19.063451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-13 05:18:19.063479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 05:18:19.063495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 
05:18:19.063509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:19.063523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 05:18:19.063544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-13 05:18:19.063559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 05:18:19.063590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:20.903413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2026-02-13 05:18:20.903536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 05:18:20.903570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-13 05:18:20.903615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 05:18:20.903637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:20.903686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:20.903730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 05:18:20.903744 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:18:20.903758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-13 05:18:20.903778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:20.903790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:20.903898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 05:18:20.903922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:18:23.185585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-13 05:18:23.185688 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:23.185722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:23.185758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 05:18:23.185774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:18:23.185883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-13 05:18:23.185898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:23.185910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:23.185929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 05:18:23.185953 | orchestrator | 2026-02-13 05:18:23.185967 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-13 05:18:23.185979 | orchestrator | Friday 13 February 2026 05:18:22 +0000 (0:00:05.733) 0:07:47.590 ******* 2026-02-13 05:18:23.185992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-13 05:18:23.186005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 05:18:23.186099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:23.353933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:23.354128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 05:18:23.354181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:18:23.354228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-13 05:18:23.354261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-13 05:18:23.354273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:23.354285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 05:18:23.354307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:23.354318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:23.354328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 05:18:23.354339 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:23.354351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-13 05:18:23.354362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 05:18:23.354381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:18:24.501558 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 
'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-13 05:18:24.501655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:24.501668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:24.501678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 05:18:24.501689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-13 05:18:24.501701 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:24.501730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-13 05:18:24.501763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:24.501778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:24.501853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-13 05:18:24.501866 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:18:24.501876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}}}})  2026-02-13 05:18:24.501899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:37.009195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:18:37.009346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-13 05:18:37.009366 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:37.009380 | orchestrator | 2026-02-13 05:18:37.010245 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-13 05:18:37.010276 | orchestrator | Friday 13 February 2026 05:18:24 +0000 
(0:00:02.251) 0:07:49.841 ******* 2026-02-13 05:18:37.010289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-13 05:18:37.010304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-13 05:18:37.010318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:18:37.010330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:18:37.010349 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:37.010369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr 
Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-13 05:18:37.010387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-13 05:18:37.010434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:18:37.010468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:18:37.010480 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:37.010492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-13 05:18:37.010513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': 
'9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-13 05:18:37.010524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:18:37.010536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-13 05:18:37.010547 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:37.010558 | orchestrator | 2026-02-13 05:18:37.010620 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-13 05:18:37.010635 | orchestrator | Friday 13 February 2026 05:18:26 +0000 (0:00:01.884) 0:07:51.726 ******* 2026-02-13 05:18:37.010647 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:37.010657 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:37.010668 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:37.010682 | orchestrator | 2026-02-13 05:18:37.010701 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-13 05:18:37.010719 | orchestrator | Friday 13 February 2026 05:18:28 +0000 (0:00:01.914) 0:07:53.640 ******* 2026-02-13 05:18:37.010737 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:37.010755 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:37.010855 | orchestrator | 
skipping: [testbed-node-2] 2026-02-13 05:18:37.010875 | orchestrator | 2026-02-13 05:18:37.010895 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-13 05:18:37.010914 | orchestrator | Friday 13 February 2026 05:18:30 +0000 (0:00:02.214) 0:07:55.855 ******* 2026-02-13 05:18:37.010933 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:18:37.010964 | orchestrator | 2026-02-13 05:18:37.010975 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-13 05:18:37.010986 | orchestrator | Friday 13 February 2026 05:18:32 +0000 (0:00:02.398) 0:07:58.254 ******* 2026-02-13 05:18:37.010999 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:18:37.011029 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 
'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:18:54.553606 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:18:54.553783 | orchestrator | 2026-02-13 05:18:54.553804 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-13 
05:18:54.553818 | orchestrator | Friday 13 February 2026 05:18:36 +0000 (0:00:04.086) 0:08:02.341 ******* 2026-02-13 05:18:54.553831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:18:54.553865 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:54.553878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:18:54.553890 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:54.553927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:18:54.553941 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:54.553952 | orchestrator | 2026-02-13 05:18:54.553964 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-13 05:18:54.553975 | orchestrator | Friday 13 February 2026 05:18:38 +0000 (0:00:01.447) 0:08:03.788 ******* 2026-02-13 05:18:54.553987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-13 05:18:54.553999 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:54.554010 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-13 05:18:54.554080 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:54.554092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-13 05:18:54.554103 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:54.554114 | orchestrator | 2026-02-13 05:18:54.554125 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-13 05:18:54.554146 | orchestrator | Friday 13 February 2026 05:18:39 +0000 (0:00:01.510) 0:08:05.299 ******* 2026-02-13 05:18:54.554157 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:54.554177 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:54.554188 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:54.554198 | orchestrator | 2026-02-13 05:18:54.554210 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-13 05:18:54.554221 | orchestrator | Friday 13 February 2026 05:18:41 +0000 (0:00:01.979) 0:08:07.279 ******* 2026-02-13 05:18:54.554232 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:54.554243 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:54.554254 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:18:54.554264 | orchestrator | 2026-02-13 05:18:54.554275 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-13 05:18:54.554286 | orchestrator | Friday 13 February 2026 05:18:44 +0000 (0:00:02.311) 0:08:09.590 ******* 2026-02-13 05:18:54.554297 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:18:54.554308 | orchestrator | 2026-02-13 05:18:54.554318 | orchestrator | 
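The rabbitmq records above show the `haproxy-config` role writing config only for entries whose `enabled` key is truthy (here `rabbitmq_management` with `enabled: 'yes'`), while other tasks skip. A minimal sketch of that gating logic, written for illustration only and not taken from Kolla-Ansible's actual implementation (the helper names are hypothetical):

```python
# Illustrative sketch (hypothetical helpers, not Kolla-Ansible code): filter a
# service's "haproxy" map down to the entries that would produce config, as
# reflected by the ok/skipping results in the log above.

def truthy(value):
    """Kolla-style boolean: accepts real bools or 'yes'/'no'-style strings."""
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1")

def services_to_configure(haproxy_map):
    """Return only the haproxy entries whose 'enabled' flag is truthy."""
    return {name: svc for name, svc in haproxy_map.items()
            if truthy(svc.get("enabled", False))}

# Data shape taken from the rabbitmq item in the log above.
rabbitmq_haproxy = {
    "rabbitmq_management": {
        "enabled": "yes",
        "mode": "http",
        "port": "15672",
        "host_group": "rabbitmq",
    },
}

selected = services_to_configure(rabbitmq_haproxy)
```

Entries with `enabled: False` (such as `prometheus_server_external` earlier in the log) would be dropped by the same filter, matching the `skipping:` lines.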
TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-13 05:18:54.554329 | orchestrator | Friday 13 February 2026 05:18:46 +0000 (0:00:02.453) 0:08:12.043 ******* 2026-02-13 05:18:54.554341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-13 05:18:54.554355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-13 05:18:54.554383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-13 05:18:56.304622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-13 05:18:56.304843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-13 05:18:56.304918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-13 05:18:56.304936 | orchestrator | 2026-02-13 05:18:56.304966 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-13 05:18:56.304979 | orchestrator | Friday 13 February 2026 05:18:54 +0000 (0:00:07.845) 0:08:19.888 ******* 2026-02-13 05:18:56.305013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-13 05:18:56.305050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-13 05:18:56.305064 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:18:56.305077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-13 05:18:56.305097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-13 05:18:56.305118 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:18:56.305141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-13 05:19:17.615511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-13 05:19:17.615634 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:19:17.615658 | orchestrator | 2026-02-13 05:19:17.615673 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-13 05:19:17.615689 | orchestrator | Friday 13 February 2026 05:18:56 +0000 (0:00:01.753) 
0:08:21.642 ******* 2026-02-13 05:19:17.615747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-13 05:19:17.615765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-13 05:19:17.615782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:19:17.615798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:19:17.615813 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:19:17.615826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-13 05:19:17.615877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-13 05:19:17.615888 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:19:17.615901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:19:17.615914 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:19:17.615929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-13 05:19:17.615938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-13 05:19:17.615964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:19:17.615973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-13 05:19:17.615981 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:19:17.615989 | orchestrator | 
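Each enabled entry like `skyline_apiserver` above (with `mode`, `listen_port`, and `backend_http_extra` such as `option httpchk GET /docs`) is ultimately rendered into an HAProxy `listen` stanza. A rough sketch of that rendering, under stated assumptions: the stanza layout is simplified, and the `bind` address is a placeholder VIP, not a value from this log:

```python
# Illustrative renderer (simplified; not the actual Kolla-Ansible template).
# The VIP address below is a placeholder assumption for the example.

def render_listen(name, svc, vip="203.0.113.10"):
    """Render a Kolla-style haproxy service dict into a listen stanza."""
    port = svc.get("listen_port", svc["port"])
    lines = [
        f"listen {name}",
        f"    bind {vip}:{port}",
        f"    mode {svc['mode']}",
    ]
    # backend_http_extra lines are emitted verbatim, e.g. httpchk options.
    for extra in svc.get("backend_http_extra", []):
        lines.append(f"    {extra}")
    return "\n".join(lines)

# Values taken from the skyline_apiserver item in the log above.
skyline_apiserver = {
    "enabled": "yes",
    "mode": "http",
    "port": "9998",
    "listen_port": "9998",
    "backend_http_extra": ["option httpchk GET /docs"],
}

stanza = render_listen("skyline_apiserver", skyline_apiserver)
```

The real template also emits backend server lines per host in the service's group and TLS settings from `tls_backend`; those are omitted here for brevity.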
2026-02-13 05:19:17.615997 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-13 05:19:17.616005 | orchestrator | Friday 13 February 2026 05:18:58 +0000 (0:00:02.020) 0:08:23.663 ******* 2026-02-13 05:19:17.616013 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:19:17.616022 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:19:17.616030 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:19:17.616037 | orchestrator | 2026-02-13 05:19:17.616045 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-13 05:19:17.616053 | orchestrator | Friday 13 February 2026 05:19:00 +0000 (0:00:02.346) 0:08:26.009 ******* 2026-02-13 05:19:17.616061 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:19:17.616069 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:19:17.616076 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:19:17.616084 | orchestrator | 2026-02-13 05:19:17.616092 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-13 05:19:17.616100 | orchestrator | Friday 13 February 2026 05:19:03 +0000 (0:00:02.960) 0:08:28.970 ******* 2026-02-13 05:19:17.616107 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:19:17.616115 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:19:17.616123 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:19:17.616131 | orchestrator | 2026-02-13 05:19:17.616139 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-13 05:19:17.616146 | orchestrator | Friday 13 February 2026 05:19:05 +0000 (0:00:01.398) 0:08:30.368 ******* 2026-02-13 05:19:17.616154 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:19:17.616162 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:19:17.616170 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:19:17.616177 | orchestrator | 2026-02-13 05:19:17.616192 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-13 05:19:17.616200 | orchestrator | Friday 13 February 2026 05:19:06 +0000 (0:00:01.396) 0:08:31.764 ******* 2026-02-13 05:19:17.616208 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:19:17.616220 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:19:17.616232 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:19:17.616246 | orchestrator | 2026-02-13 05:19:17.616258 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-13 05:19:17.616272 | orchestrator | Friday 13 February 2026 05:19:08 +0000 (0:00:01.695) 0:08:33.460 ******* 2026-02-13 05:19:17.616286 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:19:17.616299 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:19:17.616313 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:19:17.616325 | orchestrator | 2026-02-13 05:19:17.616333 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-13 05:19:17.616342 | orchestrator | Friday 13 February 2026 05:19:09 +0000 (0:00:01.373) 0:08:34.833 ******* 2026-02-13 05:19:17.616349 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:19:17.616357 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:19:17.616365 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:19:17.616373 | orchestrator | 2026-02-13 05:19:17.616381 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-13 05:19:17.616389 | orchestrator | Friday 13 February 2026 05:19:10 +0000 (0:00:01.336) 0:08:36.170 ******* 2026-02-13 05:19:17.616397 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:19:17.616406 | orchestrator | 2026-02-13 05:19:17.616420 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 
2026-02-13 05:19:17.616428 | orchestrator | Friday 13 February 2026 05:19:13 +0000 (0:00:02.665) 0:08:38.836 ******* 2026-02-13 05:19:17.616438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-13 05:19:17.616456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-13 05:19:21.921096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-13 05:19:21.921211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:19:21.921253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:19:21.921266 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-13 05:19:21.921278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:19:21.921290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-13 05:19:21.921320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 
2026-02-13 05:19:21.921332 | orchestrator | 2026-02-13 05:19:21.921344 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-13 05:19:21.921356 | orchestrator | Friday 13 February 2026 05:19:17 +0000 (0:00:04.117) 0:08:42.953 ******* 2026-02-13 05:19:21.921368 | orchestrator | changed: [testbed-node-0] => { 2026-02-13 05:19:21.921380 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:19:21.921390 | orchestrator | } 2026-02-13 05:19:21.921408 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:19:21.921417 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:19:21.921426 | orchestrator | } 2026-02-13 05:19:21.921449 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 05:19:21.921460 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:19:21.921471 | orchestrator | } 2026-02-13 05:19:21.921482 | orchestrator | 2026-02-13 05:19:21.921492 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-13 05:19:21.921501 | orchestrator | Friday 13 February 2026 05:19:19 +0000 (0:00:01.430) 0:08:44.383 ******* 2026-02-13 05:19:21.921511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-13 05:19:21.921565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:19:21.921577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:19:21.921588 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:19:21.921602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-13 05:19:21.921613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:19:21.921635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:21:22.850107 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:21:22.850235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-13 05:21:22.850258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-13 05:21:22.850272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-13 05:21:22.850286 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:21:22.850297 | orchestrator | 2026-02-13 05:21:22.850310 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-13 05:21:22.850323 | orchestrator | Friday 13 February 2026 05:19:21 +0000 (0:00:02.871) 0:08:47.254 ******* 2026-02-13 05:21:22.850354 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:21:22.850380 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:21:22.850412 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:21:22.850423 | orchestrator | 2026-02-13 05:21:22.850434 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-13 05:21:22.850463 | orchestrator | Friday 13 February 2026 05:19:23 +0000 (0:00:01.838) 0:08:49.093 
******* 2026-02-13 05:21:22.850475 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:21:22.850487 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:21:22.850499 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:21:22.850511 | orchestrator | 2026-02-13 05:21:22.850612 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-13 05:21:22.850631 | orchestrator | Friday 13 February 2026 05:19:25 +0000 (0:00:01.491) 0:08:50.585 ******* 2026-02-13 05:21:22.850646 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:21:22.850660 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:21:22.850674 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:21:22.850687 | orchestrator | 2026-02-13 05:21:22.850699 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-13 05:21:22.850711 | orchestrator | Friday 13 February 2026 05:19:32 +0000 (0:00:07.089) 0:08:57.674 ******* 2026-02-13 05:21:22.850746 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:21:22.850760 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:21:22.850772 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:21:22.850783 | orchestrator | 2026-02-13 05:21:22.850794 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-13 05:21:22.850804 | orchestrator | Friday 13 February 2026 05:19:39 +0000 (0:00:07.611) 0:09:05.286 ******* 2026-02-13 05:21:22.850816 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:21:22.850828 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:21:22.850839 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:21:22.850851 | orchestrator | 2026-02-13 05:21:22.850863 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-13 05:21:22.850875 | orchestrator | Friday 13 February 2026 05:19:47 +0000 (0:00:07.118) 0:09:12.404 ******* 2026-02-13 
05:21:22.850887 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:21:22.850905 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:21:22.850918 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:21:22.850930 | orchestrator | 2026-02-13 05:21:22.850943 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-13 05:21:22.850955 | orchestrator | Friday 13 February 2026 05:19:54 +0000 (0:00:07.703) 0:09:20.108 ******* 2026-02-13 05:21:22.850967 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:21:22.850979 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:21:22.850991 | orchestrator | 2026-02-13 05:21:22.851003 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-13 05:21:22.851016 | orchestrator | Friday 13 February 2026 05:19:58 +0000 (0:00:03.998) 0:09:24.107 ******* 2026-02-13 05:21:22.851028 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:21:22.851040 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:21:22.851052 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:21:22.851065 | orchestrator | 2026-02-13 05:21:22.851097 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-13 05:21:22.851110 | orchestrator | Friday 13 February 2026 05:20:12 +0000 (0:00:13.512) 0:09:37.619 ******* 2026-02-13 05:21:22.851122 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:21:22.851134 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:21:22.851146 | orchestrator | 2026-02-13 05:21:22.851158 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-13 05:21:22.851171 | orchestrator | Friday 13 February 2026 05:20:16 +0000 (0:00:04.711) 0:09:42.331 ******* 2026-02-13 05:21:22.851184 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:21:22.851196 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:21:22.851209 | orchestrator | 
changed: [testbed-node-2] 2026-02-13 05:21:22.851222 | orchestrator | 2026-02-13 05:21:22.851236 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-13 05:21:22.851248 | orchestrator | Friday 13 February 2026 05:20:24 +0000 (0:00:07.211) 0:09:49.543 ******* 2026-02-13 05:21:22.851260 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:21:22.851272 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:21:22.851284 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:21:22.851297 | orchestrator | 2026-02-13 05:21:22.851309 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-13 05:21:22.851337 | orchestrator | Friday 13 February 2026 05:20:31 +0000 (0:00:06.815) 0:09:56.359 ******* 2026-02-13 05:21:22.851350 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:21:22.851362 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:21:22.851374 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:21:22.851386 | orchestrator | 2026-02-13 05:21:22.851398 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-13 05:21:22.851410 | orchestrator | Friday 13 February 2026 05:20:37 +0000 (0:00:06.802) 0:10:03.161 ******* 2026-02-13 05:21:22.851422 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:21:22.851434 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:21:22.851446 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:21:22.851468 | orchestrator | 2026-02-13 05:21:22.851480 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-13 05:21:22.851492 | orchestrator | Friday 13 February 2026 05:20:44 +0000 (0:00:07.026) 0:10:10.187 ******* 2026-02-13 05:21:22.851504 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:21:22.851516 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:21:22.851548 | orchestrator | 
changed: [testbed-node-0] 2026-02-13 05:21:22.851561 | orchestrator | 2026-02-13 05:21:22.851572 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-13 05:21:22.851584 | orchestrator | Friday 13 February 2026 05:20:51 +0000 (0:00:07.151) 0:10:17.339 ******* 2026-02-13 05:21:22.851596 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:21:22.851608 | orchestrator | 2026-02-13 05:21:22.851620 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-13 05:21:22.851632 | orchestrator | Friday 13 February 2026 05:20:55 +0000 (0:00:03.622) 0:10:20.961 ******* 2026-02-13 05:21:22.851644 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:21:22.851656 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:21:22.851668 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:21:22.851680 | orchestrator | 2026-02-13 05:21:22.851691 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-13 05:21:22.851702 | orchestrator | Friday 13 February 2026 05:21:07 +0000 (0:00:12.102) 0:10:33.063 ******* 2026-02-13 05:21:22.851713 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:21:22.851725 | orchestrator | 2026-02-13 05:21:22.851746 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-13 05:21:22.851757 | orchestrator | Friday 13 February 2026 05:21:11 +0000 (0:00:03.587) 0:10:36.650 ******* 2026-02-13 05:21:22.851770 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:21:22.851781 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:21:22.851792 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:21:22.851805 | orchestrator | 2026-02-13 05:21:22.851817 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-13 05:21:22.851829 | orchestrator | Friday 13 February 2026 05:21:18 +0000 (0:00:06.782) 
0:10:43.433 ******* 2026-02-13 05:21:22.851841 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:21:22.851853 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:21:22.851865 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:21:22.851877 | orchestrator | 2026-02-13 05:21:22.851889 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-13 05:21:22.851901 | orchestrator | Friday 13 February 2026 05:21:20 +0000 (0:00:01.991) 0:10:45.424 ******* 2026-02-13 05:21:22.851913 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:21:22.851925 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:21:22.851937 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:21:22.851949 | orchestrator | 2026-02-13 05:21:22.851961 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 05:21:22.851974 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-13 05:21:22.851988 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-13 05:21:22.852000 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-13 05:21:22.852012 | orchestrator | 2026-02-13 05:21:22.852024 | orchestrator | 2026-02-13 05:21:22.852036 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 05:21:22.852048 | orchestrator | Friday 13 February 2026 05:21:22 +0000 (0:00:02.746) 0:10:48.171 ******* 2026-02-13 05:21:22.852060 | orchestrator | =============================================================================== 2026-02-13 05:21:22.852072 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.51s 2026-02-13 05:21:22.852084 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.10s 2026-02-13 
05:21:22.852105 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.85s 2026-02-13 05:21:22.852126 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.70s 2026-02-13 05:21:23.734237 | orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.61s 2026-02-13 05:21:23.734340 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.21s 2026-02-13 05:21:23.734358 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.15s 2026-02-13 05:21:23.734372 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.12s 2026-02-13 05:21:23.734387 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.09s 2026-02-13 05:21:23.734401 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 7.03s 2026-02-13 05:21:23.734412 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.82s 2026-02-13 05:21:23.734420 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.80s 2026-02-13 05:21:23.734429 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.78s 2026-02-13 05:21:23.734437 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.78s 2026-02-13 05:21:23.734445 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.77s 2026-02-13 05:21:23.734453 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.77s 2026-02-13 05:21:23.734460 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.73s 2026-02-13 05:21:23.734468 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.47s 2026-02-13 05:21:23.734476 
| orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.25s 2026-02-13 05:21:23.734484 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.18s 2026-02-13 05:21:24.017611 | orchestrator | + osism apply -a upgrade opensearch 2026-02-13 05:21:26.032001 | orchestrator | 2026-02-13 05:21:26 | INFO  | Task b996dc67-d782-49f8-888b-35e224fd3fe6 (opensearch) was prepared for execution. 2026-02-13 05:21:26.032091 | orchestrator | 2026-02-13 05:21:26 | INFO  | It takes a moment until task b996dc67-d782-49f8-888b-35e224fd3fe6 (opensearch) has been started and output is visible here. 2026-02-13 05:21:43.835675 | orchestrator | 2026-02-13 05:21:43.835784 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 05:21:43.835801 | orchestrator | 2026-02-13 05:21:43.835813 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 05:21:43.835826 | orchestrator | Friday 13 February 2026 05:21:31 +0000 (0:00:01.570) 0:00:01.570 ******* 2026-02-13 05:21:43.835837 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:21:43.835849 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:21:43.835860 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:21:43.835871 | orchestrator | 2026-02-13 05:21:43.835882 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 05:21:43.835893 | orchestrator | Friday 13 February 2026 05:21:33 +0000 (0:00:01.640) 0:00:03.210 ******* 2026-02-13 05:21:43.835904 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-13 05:21:43.835932 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-13 05:21:43.835944 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-13 05:21:43.835955 | orchestrator | 2026-02-13 05:21:43.835965 | orchestrator | PLAY [Apply role 
opensearch] *************************************************** 2026-02-13 05:21:43.835976 | orchestrator | 2026-02-13 05:21:43.835987 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-13 05:21:43.835998 | orchestrator | Friday 13 February 2026 05:21:35 +0000 (0:00:02.037) 0:00:05.248 ******* 2026-02-13 05:21:43.836009 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:21:43.836042 | orchestrator | 2026-02-13 05:21:43.836053 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-13 05:21:43.836063 | orchestrator | Friday 13 February 2026 05:21:37 +0000 (0:00:02.081) 0:00:07.330 ******* 2026-02-13 05:21:43.836074 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-13 05:21:43.836085 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-13 05:21:43.836095 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-13 05:21:43.836106 | orchestrator | 2026-02-13 05:21:43.836117 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-13 05:21:43.836127 | orchestrator | Friday 13 February 2026 05:21:39 +0000 (0:00:02.282) 0:00:09.613 ******* 2026-02-13 05:21:43.836141 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:21:43.836157 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:21:43.836185 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:21:43.836206 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:21:43.836231 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:21:43.836246 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:21:43.836261 | orchestrator | 2026-02-13 
05:21:43.836274 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-13 05:21:43.836286 | orchestrator | Friday 13 February 2026 05:21:42 +0000 (0:00:02.342) 0:00:11.955 *******
2026-02-13 05:21:43.836299 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 05:21:43.836312 | orchestrator |
2026-02-13 05:21:43.836331 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2026-02-13 05:21:49.576952 | orchestrator | Friday 13 February 2026 05:21:43 +0000 (0:00:01.664) 0:00:13.620 *******
2026-02-13 05:21:49.577042 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:21:49.577065 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes':
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:21:49.577070 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:21:49.577077 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:21:49.577097 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:21:49.577106 | 
orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-13 05:21:49.577112 | orchestrator |
2026-02-13 05:21:49.577117 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2026-02-13 05:21:49.577122 | orchestrator | Friday 13 February 2026 05:21:47 +0000 (0:00:03.927) 0:00:17.548 *******
2026-02-13 05:21:49.577126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:21:49.577137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:21:51.396433 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:21:51.396607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:21:51.396627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:21:51.396642 | 
orchestrator | skipping: [testbed-node-1] 2026-02-13 05:21:51.396653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:21:51.396688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-13 05:21:51.396724 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:21:51.396737 | orchestrator |
2026-02-13 05:21:51.396749 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2026-02-13 05:21:51.396762 | orchestrator | Friday 13 February 2026 05:21:49 +0000 (0:00:01.817) 0:00:19.366 *******
2026-02-13 05:21:51.396770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})
2026-02-13 05:21:51.396777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:21:51.396784 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:21:51.396791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:21:51.396814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:21:55.148911 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:21:55.149007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 
 2026-02-13 05:21:55.149022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:21:55.149032 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:21:55.149040 | orchestrator | 2026-02-13 05:21:55.149050 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-13 05:21:55.149060 | orchestrator | Friday 13 February 2026 05:21:51 +0000 (0:00:01.818) 0:00:21.184 ******* 2026-02-13 05:21:55.149069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:21:55.149124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:21:55.149134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:21:55.149142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:21:55.149151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:21:55.149178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})
2026-02-13 05:22:08.694583 | orchestrator |
2026-02-13 05:22:08.694694 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2026-02-13 05:22:08.694711 | orchestrator | Friday 13 February 2026 05:21:55 +0000 (0:00:03.752) 0:00:24.937 *******
2026-02-13 05:22:08.694724 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:22:08.694736 | orchestrator | ok: [testbed-node-1]
2026-02-13 05:22:08.694747 | orchestrator | ok: [testbed-node-2]
2026-02-13 05:22:08.694758 | orchestrator |
2026-02-13 05:22:08.694770 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2026-02-13 05:22:08.694781 | orchestrator | Friday 13 February 2026 05:21:58 +0000 (0:00:03.395) 0:00:28.332 *******
2026-02-13 05:22:08.694792 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:22:08.694803 | orchestrator | ok: [testbed-node-1]
2026-02-13 05:22:08.694814 | orchestrator | ok: [testbed-node-2]
2026-02-13 05:22:08.694825 | orchestrator |
2026-02-13 05:22:08.694836 | orchestrator | TASK [service-check-containers : opensearch | Check containers] ****************
2026-02-13 05:22:08.694847 | orchestrator | Friday 13 February 2026 05:22:01 +0000 (0:00:03.183) 0:00:31.515 *******
2026-02-13 05:22:08.694861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:22:08.694903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:22:08.694944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-13 05:22:08.695005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:22:08.695032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:22:08.695066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-13 05:22:08.695088 | orchestrator | 2026-02-13 
05:22:08.695111 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-13 05:22:08.695132 | orchestrator | Friday 13 February 2026 05:22:05 +0000 (0:00:03.656) 0:00:35.171 ******* 2026-02-13 05:22:08.695146 | orchestrator | changed: [testbed-node-0] => { 2026-02-13 05:22:08.695160 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:22:08.695173 | orchestrator | } 2026-02-13 05:22:08.695192 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:22:08.695210 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:22:08.695223 | orchestrator | } 2026-02-13 05:22:08.695249 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 05:22:08.695263 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:22:08.695276 | orchestrator | } 2026-02-13 05:22:08.695288 | orchestrator | 2026-02-13 05:22:08.695301 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-13 05:22:08.695314 | orchestrator | Friday 13 February 2026 05:22:06 +0000 (0:00:01.328) 0:00:36.500 ******* 2026-02-13 05:22:08.695338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:25:16.529180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:25:16.529436 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:25:16.529474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:25:16.529516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:25:16.529530 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:25:16.529562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-13 05:25:16.529576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-13 05:25:16.529597 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:25:16.529609 | orchestrator | 2026-02-13 05:25:16.529622 | orchestrator | TASK [opensearch : include_tasks] 
********************************************** 2026-02-13 05:25:16.529634 | orchestrator | Friday 13 February 2026 05:22:08 +0000 (0:00:01.978) 0:00:38.478 ******* 2026-02-13 05:25:16.529645 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:25:16.529656 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:25:16.529666 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:25:16.529677 | orchestrator | 2026-02-13 05:25:16.529688 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-13 05:25:16.529699 | orchestrator | Friday 13 February 2026 05:22:10 +0000 (0:00:01.566) 0:00:40.045 ******* 2026-02-13 05:25:16.529711 | orchestrator | 2026-02-13 05:25:16.529723 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-13 05:25:16.529736 | orchestrator | Friday 13 February 2026 05:22:10 +0000 (0:00:00.448) 0:00:40.494 ******* 2026-02-13 05:25:16.529748 | orchestrator | 2026-02-13 05:25:16.529760 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-13 05:25:16.529772 | orchestrator | Friday 13 February 2026 05:22:11 +0000 (0:00:00.424) 0:00:40.919 ******* 2026-02-13 05:25:16.529784 | orchestrator | 2026-02-13 05:25:16.529796 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-13 05:25:16.529808 | orchestrator | Friday 13 February 2026 05:22:11 +0000 (0:00:00.800) 0:00:41.720 ******* 2026-02-13 05:25:16.529821 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:25:16.529834 | orchestrator | 2026-02-13 05:25:16.529846 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-13 05:25:16.529858 | orchestrator | Friday 13 February 2026 05:22:15 +0000 (0:00:03.602) 0:00:45.322 ******* 2026-02-13 05:25:16.529871 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:25:16.529883 | orchestrator | 2026-02-13 
05:25:16.529894 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-13 05:25:16.529906 | orchestrator | Friday 13 February 2026 05:22:24 +0000 (0:00:08.835) 0:00:54.158 ******* 2026-02-13 05:25:16.529919 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:25:16.529932 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:25:16.529944 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:25:16.529956 | orchestrator | 2026-02-13 05:25:16.529968 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-13 05:25:16.529986 | orchestrator | Friday 13 February 2026 05:23:30 +0000 (0:01:06.604) 0:02:00.763 ******* 2026-02-13 05:25:16.529998 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:25:16.530010 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:25:16.530134 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:25:16.530147 | orchestrator | 2026-02-13 05:25:16.530161 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-13 05:25:16.530180 | orchestrator | Friday 13 February 2026 05:25:07 +0000 (0:01:36.064) 0:03:36.827 ******* 2026-02-13 05:25:16.530203 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:25:16.530243 | orchestrator | 2026-02-13 05:25:16.530289 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-13 05:25:16.530307 | orchestrator | Friday 13 February 2026 05:25:08 +0000 (0:00:01.691) 0:03:38.519 ******* 2026-02-13 05:25:16.530324 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:25:16.530340 | orchestrator | 2026-02-13 05:25:16.530356 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-13 05:25:16.530372 | orchestrator | Friday 13 February 2026 05:25:11 +0000 (0:00:03.233) 
0:03:41.753 ******* 2026-02-13 05:25:16.530389 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:25:16.530408 | orchestrator | 2026-02-13 05:25:16.530426 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-13 05:25:16.530444 | orchestrator | Friday 13 February 2026 05:25:15 +0000 (0:00:03.354) 0:03:45.107 ******* 2026-02-13 05:25:16.530462 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:25:16.530480 | orchestrator | 2026-02-13 05:25:16.530499 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-13 05:25:16.530531 | orchestrator | Friday 13 February 2026 05:25:16 +0000 (0:00:01.205) 0:03:46.313 ******* 2026-02-13 05:25:18.792581 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:25:18.792667 | orchestrator | 2026-02-13 05:25:18.792677 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 05:25:18.792686 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:25:18.792695 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-13 05:25:18.792702 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-13 05:25:18.792708 | orchestrator | 2026-02-13 05:25:18.792714 | orchestrator | 2026-02-13 05:25:18.792720 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 05:25:18.792727 | orchestrator | Friday 13 February 2026 05:25:18 +0000 (0:00:01.905) 0:03:48.218 ******* 2026-02-13 05:25:18.792733 | orchestrator | =============================================================================== 2026-02-13 05:25:18.792739 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 96.06s 2026-02-13 05:25:18.792745 | orchestrator | opensearch 
: Restart opensearch container ------------------------------ 66.60s 2026-02-13 05:25:18.792751 | orchestrator | opensearch : Perform a flush -------------------------------------------- 8.84s 2026-02-13 05:25:18.792757 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.93s 2026-02-13 05:25:18.792764 | orchestrator | opensearch : Copying over config.json files for services ---------------- 3.75s 2026-02-13 05:25:18.792770 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.66s 2026-02-13 05:25:18.792776 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.60s 2026-02-13 05:25:18.792782 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.40s 2026-02-13 05:25:18.792788 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.35s 2026-02-13 05:25:18.792794 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.23s 2026-02-13 05:25:18.792801 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.18s 2026-02-13 05:25:18.792807 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.34s 2026-02-13 05:25:18.792813 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.28s 2026-02-13 05:25:18.792819 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.08s 2026-02-13 05:25:18.792825 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.04s 2026-02-13 05:25:18.792831 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.98s 2026-02-13 05:25:18.792862 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 1.91s 2026-02-13 05:25:18.792869 | orchestrator | service-cert-copy : 
opensearch | Copying over backend internal TLS key --- 1.82s 2026-02-13 05:25:18.792875 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.82s 2026-02-13 05:25:18.792882 | orchestrator | opensearch : include_tasks ---------------------------------------------- 1.69s 2026-02-13 05:25:19.084407 | orchestrator | + osism apply -a upgrade memcached 2026-02-13 05:25:21.137794 | orchestrator | 2026-02-13 05:25:21 | INFO  | Task c1083345-e0d2-4a76-b0f4-da2781fbf7cd (memcached) was prepared for execution. 2026-02-13 05:25:21.137900 | orchestrator | 2026-02-13 05:25:21 | INFO  | It takes a moment until task c1083345-e0d2-4a76-b0f4-da2781fbf7cd (memcached) has been started and output is visible here. 2026-02-13 05:25:53.079082 | orchestrator | 2026-02-13 05:25:53.079303 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 05:25:53.079342 | orchestrator | 2026-02-13 05:25:53.079382 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 05:25:53.079421 | orchestrator | Friday 13 February 2026 05:25:26 +0000 (0:00:01.389) 0:00:01.389 ******* 2026-02-13 05:25:53.079453 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:25:53.079473 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:25:53.079494 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:25:53.079513 | orchestrator | 2026-02-13 05:25:53.079533 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 05:25:53.079554 | orchestrator | Friday 13 February 2026 05:25:28 +0000 (0:00:01.754) 0:00:03.143 ******* 2026-02-13 05:25:53.079567 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-13 05:25:53.079578 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-13 05:25:53.079590 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-13 
05:25:53.079603 | orchestrator | 2026-02-13 05:25:53.079616 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-13 05:25:53.079629 | orchestrator | 2026-02-13 05:25:53.079643 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-13 05:25:53.079656 | orchestrator | Friday 13 February 2026 05:25:30 +0000 (0:00:01.737) 0:00:04.880 ******* 2026-02-13 05:25:53.079670 | orchestrator | included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:25:53.079683 | orchestrator | 2026-02-13 05:25:53.079696 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-13 05:25:53.079710 | orchestrator | Friday 13 February 2026 05:25:32 +0000 (0:00:02.535) 0:00:07.416 ******* 2026-02-13 05:25:53.079723 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-13 05:25:53.079736 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-13 05:25:53.079749 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-13 05:25:53.079764 | orchestrator | 2026-02-13 05:25:53.079776 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-13 05:25:53.079788 | orchestrator | Friday 13 February 2026 05:25:34 +0000 (0:00:01.802) 0:00:09.218 ******* 2026-02-13 05:25:53.079799 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-13 05:25:53.079810 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-13 05:25:53.079822 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-13 05:25:53.079833 | orchestrator | 2026-02-13 05:25:53.079844 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-13 05:25:53.079855 | orchestrator | Friday 13 February 2026 05:25:36 +0000 (0:00:02.515) 0:00:11.734 ******* 2026-02-13 05:25:53.079871 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-13 05:25:53.079908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-13 05:25:53.079943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-13 05:25:53.079956 | orchestrator | 2026-02-13 05:25:53.079974 | orchestrator | TASK [service-check-containers : memcached | Notify handlers to restart containers] *** 2026-02-13 05:25:53.079986 | orchestrator | Friday 13 February 2026 05:25:39 +0000 (0:00:02.191) 0:00:13.926 ******* 2026-02-13 05:25:53.079997 | orchestrator | changed: [testbed-node-0] => { 2026-02-13 05:25:53.080009 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:25:53.080020 | orchestrator | } 2026-02-13 05:25:53.080032 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:25:53.080043 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:25:53.080053 | orchestrator | } 2026-02-13 05:25:53.080064 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 05:25:53.080075 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:25:53.080086 | orchestrator | } 2026-02-13 05:25:53.080098 | orchestrator | 2026-02-13 05:25:53.080109 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-13 05:25:53.080120 | orchestrator | Friday 13 February 2026 05:25:40 +0000 (0:00:01.296) 0:00:15.222 ******* 2026-02-13 05:25:53.080131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-13 05:25:53.080143 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:25:53.080155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-13 05:25:53.080174 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:25:53.080186 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-13 05:25:53.080197 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:25:53.080208 | orchestrator | 2026-02-13 05:25:53.080262 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-13 05:25:53.080284 | orchestrator | Friday 13 February 2026 05:25:42 +0000 (0:00:01.928) 0:00:17.151 ******* 2026-02-13 05:25:53.080302 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:25:53.080320 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:25:53.080339 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:25:53.080357 | orchestrator | 2026-02-13 05:25:53.080377 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 05:25:53.080398 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 05:25:53.080418 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 05:25:53.080431 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 05:25:53.080442 | orchestrator | 2026-02-13 05:25:53.080453 | orchestrator | 2026-02-13 05:25:53.080464 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 05:25:53.080484 | orchestrator | Friday 13 February 2026 05:25:53 +0000 (0:00:10.669) 0:00:27.821 ******* 2026-02-13 05:25:53.371406 | orchestrator | =============================================================================== 2026-02-13 05:25:53.371510 | orchestrator | memcached : Restart memcached container -------------------------------- 10.67s 2026-02-13 05:25:53.371526 | orchestrator | memcached : include_tasks ----------------------------------------------- 2.54s 2026-02-13 05:25:53.371538 | orchestrator | memcached 
: Copying over config.json files for services ----------------- 2.52s 2026-02-13 05:25:53.371550 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.19s 2026-02-13 05:25:53.371561 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.93s 2026-02-13 05:25:53.371572 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.80s 2026-02-13 05:25:53.371584 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.75s 2026-02-13 05:25:53.371595 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.74s 2026-02-13 05:25:53.371628 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.30s 2026-02-13 05:25:53.659545 | orchestrator | + osism apply -a upgrade redis 2026-02-13 05:25:55.715700 | orchestrator | 2026-02-13 05:25:55 | INFO  | Task d2a2c15b-4816-4f1f-848b-1195dfaccf7d (redis) was prepared for execution. 2026-02-13 05:25:55.715833 | orchestrator | 2026-02-13 05:25:55 | INFO  | It takes a moment until task d2a2c15b-4816-4f1f-848b-1195dfaccf7d (redis) has been started and output is visible here. 
2026-02-13 05:26:08.425018 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-13 05:26:08.425110 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-13 05:26:08.425131 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-13 05:26:08.425139 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-13 05:26:08.425156 | orchestrator | 2026-02-13 05:26:08.425166 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 05:26:08.425174 | orchestrator | 2026-02-13 05:26:08.425183 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 05:26:08.425191 | orchestrator | Friday 13 February 2026 05:26:01 +0000 (0:00:01.305) 0:00:01.305 ******* 2026-02-13 05:26:08.425199 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:26:08.425407 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:26:08.425439 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:26:08.425448 | orchestrator | 2026-02-13 05:26:08.425457 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 05:26:08.425465 | orchestrator | Friday 13 February 2026 05:26:02 +0000 (0:00:01.018) 0:00:02.323 ******* 2026-02-13 05:26:08.425473 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-13 05:26:08.425482 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-13 05:26:08.425490 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-13 05:26:08.425498 | orchestrator | 2026-02-13 05:26:08.425506 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-13 05:26:08.425514 | orchestrator | 2026-02-13 05:26:08.425522 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-13 
05:26:08.425529 | orchestrator | Friday 13 February 2026 05:26:03 +0000 (0:00:01.071) 0:00:03.395 ******* 2026-02-13 05:26:08.425538 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:26:08.425547 | orchestrator | 2026-02-13 05:26:08.425557 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-13 05:26:08.425566 | orchestrator | Friday 13 February 2026 05:26:04 +0000 (0:00:01.678) 0:00:05.074 ******* 2026-02-13 05:26:08.425579 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:08.425593 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:08.425642 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:08.425653 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:08.425681 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:08.425691 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:08.425701 | orchestrator | 2026-02-13 05:26:08.425711 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-13 05:26:08.425720 | orchestrator | Friday 13 February 2026 05:26:06 +0000 (0:00:01.389) 0:00:06.463 ******* 2026-02-13 05:26:08.425730 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:08.425740 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:08.425763 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:08.425773 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:08.425789 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248501 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248573 | orchestrator | 2026-02-13 05:26:13.248580 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-13 05:26:13.248585 | orchestrator | Friday 13 February 2026 05:26:08 +0000 (0:00:02.117) 0:00:08.581 ******* 2026-02-13 05:26:13.248590 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248596 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248626 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248630 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248635 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248650 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248654 | orchestrator | 2026-02-13 05:26:13.248658 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-13 05:26:13.248662 | orchestrator | Friday 13 February 2026 05:26:11 +0000 (0:00:02.824) 0:00:11.405 ******* 2026-02-13 05:26:13.248666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:13.248719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-13 05:26:35.915238 | orchestrator | 2026-02-13 05:26:35.915381 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-13 05:26:35.915411 | orchestrator | Friday 13 February 2026 05:26:13 +0000 (0:00:02.006) 0:00:13.412 ******* 2026-02-13 05:26:35.915434 | orchestrator | changed: [testbed-node-0] 
=> { 2026-02-13 05:26:35.915455 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:26:35.915474 | orchestrator | } 2026-02-13 05:26:35.915492 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:26:35.915526 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:26:35.915577 | orchestrator | } 2026-02-13 05:26:35.915596 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 05:26:35.915614 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:26:35.915631 | orchestrator | } 2026-02-13 05:26:35.915649 | orchestrator | 2026-02-13 05:26:35.915670 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-13 05:26:35.915693 | orchestrator | Friday 13 February 2026 05:26:13 +0000 (0:00:00.531) 0:00:13.944 ******* 2026-02-13 05:26:35.915718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-13 05:26:35.915765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-13 05:26:35.915792 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-13 05:26:35.915815 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-13 05:26:35.915859 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:26:35.915883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-13 05:26:35.915906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-13 05:26:35.915928 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:26:35.915974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-13 05:26:35.916008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-13 05:26:35.916029 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:26:35.916047 | orchestrator | 2026-02-13 05:26:35.916065 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-13 05:26:35.916084 | orchestrator | Friday 13 February 2026 05:26:14 +0000 (0:00:01.107) 0:00:15.051 ******* 2026-02-13 05:26:35.916103 | orchestrator | 2026-02-13 05:26:35.916120 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-13 05:26:35.916138 | orchestrator | Friday 13 February 2026 05:26:14 +0000 (0:00:00.083) 0:00:15.135 ******* 2026-02-13 05:26:35.916156 | orchestrator | 2026-02-13 05:26:35.916174 | orchestrator | TASK [redis : 
Flush handlers] **************************************************
2026-02-13 05:26:35.916227 | orchestrator | Friday 13 February 2026 05:26:15 +0000 (0:00:00.076) 0:00:15.211 *******
2026-02-13 05:26:35.916246 | orchestrator |
2026-02-13 05:26:35.916262 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-13 05:26:35.916280 | orchestrator | Friday 13 February 2026 05:26:15 +0000 (0:00:00.071) 0:00:15.283 *******
2026-02-13 05:26:35.916297 | orchestrator | changed: [testbed-node-0]
2026-02-13 05:26:35.916317 | orchestrator | changed: [testbed-node-1]
2026-02-13 05:26:35.916335 | orchestrator | changed: [testbed-node-2]
2026-02-13 05:26:35.916354 | orchestrator |
2026-02-13 05:26:35.916365 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-13 05:26:35.916377 | orchestrator | Friday 13 February 2026 05:26:24 +0000 (0:00:09.873) 0:00:25.157 *******
2026-02-13 05:26:35.916387 | orchestrator | changed: [testbed-node-0]
2026-02-13 05:26:35.916399 | orchestrator | changed: [testbed-node-1]
2026-02-13 05:26:35.916410 | orchestrator | changed: [testbed-node-2]
2026-02-13 05:26:35.916421 | orchestrator |
2026-02-13 05:26:35.916439 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 05:26:35.916452 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-13 05:26:35.916465 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-13 05:26:35.916476 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-13 05:26:35.916487 | orchestrator |
2026-02-13 05:26:35.916498 | orchestrator |
2026-02-13 05:26:35.916509 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 05:26:35.916520 | orchestrator | Friday 13 February 2026 05:26:35 +0000 (0:00:10.554) 0:00:35.711 *******
2026-02-13 05:26:35.916531 | orchestrator | ===============================================================================
2026-02-13 05:26:35.916542 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.55s
2026-02-13 05:26:35.916553 | orchestrator | redis : Restart redis container ----------------------------------------- 9.87s
2026-02-13 05:26:35.916564 | orchestrator | redis : Copying over redis config files --------------------------------- 2.82s
2026-02-13 05:26:35.916576 | orchestrator | redis : Copying over default config.json files -------------------------- 2.12s
2026-02-13 05:26:35.916595 | orchestrator | service-check-containers : redis | Check containers --------------------- 2.01s
2026-02-13 05:26:35.916606 | orchestrator | redis : include_tasks --------------------------------------------------- 1.68s
2026-02-13 05:26:35.916617 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.39s
2026-02-13 05:26:35.916628 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.11s
2026-02-13 05:26:35.916639 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.07s
2026-02-13 05:26:35.916650 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.02s
2026-02-13 05:26:35.916661 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 0.53s
2026-02-13 05:26:35.916672 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.23s
2026-02-13 05:26:36.189242 | orchestrator | + osism apply -a upgrade mariadb
2026-02-13 05:26:38.165657 | orchestrator | 2026-02-13 05:26:38 | INFO  | Task 110d6f39-1024-4d70-99e3-6829a76f1e42 (mariadb) was prepared for execution.
2026-02-13 05:26:38.165744 | orchestrator | 2026-02-13 05:26:38 | INFO  | It takes a moment until task 110d6f39-1024-4d70-99e3-6829a76f1e42 (mariadb) has been started and output is visible here. 2026-02-13 05:26:51.078577 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-13 05:26:51.078681 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-13 05:26:51.078707 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-13 05:26:51.078717 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-13 05:26:51.078737 | orchestrator | 2026-02-13 05:26:51.078748 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 05:26:51.078757 | orchestrator | 2026-02-13 05:26:51.078767 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 05:26:51.078777 | orchestrator | Friday 13 February 2026 05:26:43 +0000 (0:00:00.915) 0:00:00.915 ******* 2026-02-13 05:26:51.078787 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:26:51.078798 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:26:51.078807 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:26:51.078817 | orchestrator | 2026-02-13 05:26:51.078828 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 05:26:51.078845 | orchestrator | Friday 13 February 2026 05:26:44 +0000 (0:00:00.923) 0:00:01.839 ******* 2026-02-13 05:26:51.078868 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-13 05:26:51.078890 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-13 05:26:51.078907 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-13 05:26:51.078923 | orchestrator | 2026-02-13 05:26:51.078939 | orchestrator | PLAY [Apply role mariadb] 
****************************************************** 2026-02-13 05:26:51.078954 | orchestrator | 2026-02-13 05:26:51.078971 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-13 05:26:51.078986 | orchestrator | Friday 13 February 2026 05:26:44 +0000 (0:00:00.839) 0:00:02.679 ******* 2026-02-13 05:26:51.079002 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:26:51.079019 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-13 05:26:51.079036 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-13 05:26:51.079052 | orchestrator | 2026-02-13 05:26:51.079068 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-13 05:26:51.079085 | orchestrator | Friday 13 February 2026 05:26:45 +0000 (0:00:00.394) 0:00:03.074 ******* 2026-02-13 05:26:51.079102 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:26:51.079142 | orchestrator | 2026-02-13 05:26:51.079155 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-13 05:26:51.079167 | orchestrator | Friday 13 February 2026 05:26:46 +0000 (0:00:01.255) 0:00:04.329 ******* 2026-02-13 05:26:51.079224 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 05:26:51.079264 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 05:26:51.079285 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 05:26:51.079306 | orchestrator | 2026-02-13 05:26:51.079318 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-13 05:26:51.079330 | orchestrator | Friday 13 February 2026 05:26:49 +0000 (0:00:02.739) 0:00:07.069 ******* 2026-02-13 05:26:51.079342 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:26:51.079354 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:26:51.079365 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:26:51.079377 | orchestrator | 2026-02-13 05:26:51.079388 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-13 05:26:51.079399 | orchestrator | Friday 13 February 2026 05:26:49 +0000 (0:00:00.547) 0:00:07.617 ******* 2026-02-13 05:26:51.079410 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:26:51.079422 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:26:51.079433 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:26:51.079444 | 
orchestrator | 2026-02-13 05:26:51.079455 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-02-13 05:26:51.079473 | orchestrator | Friday 13 February 2026 05:26:51 +0000 (0:00:01.190) 0:00:08.807 ******* 2026-02-13 05:27:02.721626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 05:27:02.721783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-02-13 05:27:02.721824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 05:27:02.721846 | orchestrator | 2026-02-13 05:27:02.721860 | orchestrator | TASK [mariadb : Copying over 
config.json files for mariabackup] ****************
2026-02-13 05:27:02.721872 | orchestrator | Friday 13 February 2026 05:26:54 +0000 (0:00:03.217) 0:00:12.024 *******
2026-02-13 05:27:02.721883 | orchestrator | skipping: [testbed-node-1]
2026-02-13 05:27:02.721895 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:27:02.721906 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:27:02.721918 | orchestrator |
2026-02-13 05:27:02.721930 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-13 05:27:02.721941 | orchestrator | Friday 13 February 2026 05:26:55 +0000 (0:00:01.031) 0:00:13.056 *******
2026-02-13 05:27:02.721951 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:27:02.721962 | orchestrator | ok: [testbed-node-1]
2026-02-13 05:27:02.721973 | orchestrator | ok: [testbed-node-2]
2026-02-13 05:27:02.721984 | orchestrator |
2026-02-13 05:27:02.722000 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-13 05:27:02.722012 | orchestrator | Friday 13 February 2026 05:26:59 +0000 (0:00:03.788) 0:00:16.844 *******
2026-02-13 05:27:02.722091 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-13 05:27:02.722104 | orchestrator |
2026-02-13 05:27:02.722115 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-13 05:27:02.722126 | orchestrator | Friday 13 February 2026 05:27:00 +0000 (0:00:01.078) 0:00:17.923 *******
2026-02-13 05:27:02.722147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro',
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:05.253530 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:05.253625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:05.253655 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:05.253664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:05.253671 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:05.253677 | orchestrator | 2026-02-13 05:27:05.253685 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-13 05:27:05.253691 | orchestrator | Friday 13 February 2026 05:27:02 +0000 (0:00:02.534) 0:00:20.457 ******* 2026-02-13 05:27:05.253713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:05.253725 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:05.253735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:05.253742 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:05.253755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:11.073963 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:11.074087 | orchestrator | 2026-02-13 05:27:11.074103 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-13 05:27:11.074116 | orchestrator | Friday 13 February 2026 
05:27:05 +0000 (0:00:02.524) 0:00:22.981 ******* 2026-02-13 05:27:11.074146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:11.074218 | orchestrator | skipping: [testbed-node-0] 
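Each loop item echoed above carries the same HAProxy `custom_member_list` pattern: one active server line for the first shard node and `backup`-flagged lines for the other two, all with `check port 3306 inter 2000 rise 2 fall 5` so HAProxy sends writes to a single Galera node and fails over only when it goes down. As a minimal sketch (a hypothetical Python helper, not part of the kolla role), the pattern can be reproduced like this:

```python
# Hypothetical helper reproducing the HAProxy member lines seen in the
# role's custom_member_list. The first node is the active backend; the
# others get "backup" so traffic fails over only when it is down
# (single-writer access to the Galera cluster).
def member_lines(nodes, port=3306):
    lines = []
    for i, (name, addr) in enumerate(nodes):
        line = (f"server {name} {addr}:{port} "
                f"check port {port} inter 2000 rise 2 fall 5")
        if i > 0:
            line += " backup"
        lines.append(line)
    return lines

lines = member_lines([("testbed-node-0", "192.168.16.10"),
                      ("testbed-node-1", "192.168.16.11"),
                      ("testbed-node-2", "192.168.16.12")])
print("\n".join(lines))
```

The `check port 3306` probes land on the same `clustercheck` script that the container healthcheck uses, so HAProxy and Docker agree on node health.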
2026-02-13 05:27:11.074231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:11.074261 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:11.074295 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:11.074307 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:11.074318 | orchestrator | 2026-02-13 05:27:11.074328 | orchestrator | TASK 
[service-check-containers : mariadb | Check containers] ******************* 2026-02-13 05:27:11.074337 | orchestrator | Friday 13 February 2026 05:27:08 +0000 (0:00:02.899) 0:00:25.881 ******* 2026-02-13 05:27:11.074348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 05:27:11.074377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 05:27:14.517612 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-13 05:27:14.517785 | orchestrator | 2026-02-13 05:27:14.517817 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] 
*** 2026-02-13 05:27:14.517839 | orchestrator | Friday 13 February 2026 05:27:11 +0000 (0:00:02.930) 0:00:28.811 ******* 2026-02-13 05:27:14.517860 | orchestrator | changed: [testbed-node-0] => { 2026-02-13 05:27:14.517881 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:27:14.517900 | orchestrator | } 2026-02-13 05:27:14.517919 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:27:14.517937 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:27:14.517956 | orchestrator | } 2026-02-13 05:27:14.517975 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 05:27:14.517987 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:27:14.517997 | orchestrator | } 2026-02-13 05:27:14.518008 | orchestrator | 2026-02-13 05:27:14.518091 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-13 05:27:14.518103 | orchestrator | Friday 13 February 2026 05:27:11 +0000 (0:00:00.366) 0:00:29.178 ******* 2026-02-13 05:27:14.518181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:14.518198 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:14.518224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': 
[' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:14.518241 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:14.518261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:14.518276 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:14.518289 | orchestrator | 2026-02-13 05:27:14.518303 | orchestrator | TASK [mariadb : Checking for mariadb cluster] ********************************** 2026-02-13 05:27:14.518331 | orchestrator | Friday 13 February 2026 05:27:14 +0000 (0:00:03.070) 0:00:32.249 ******* 2026-02-13 05:27:23.188964 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189040 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189046 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189051 | orchestrator | 2026-02-13 05:27:23.189056 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] **************************** 2026-02-13 05:27:23.189061 | orchestrator | Friday 13 February 2026 05:27:14 +0000 (0:00:00.360) 0:00:32.609 ******* 2026-02-13 05:27:23.189065 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189069 | orchestrator | 2026-02-13 05:27:23.189073 | orchestrator | TASK [mariadb : Stop MariaDB containers] *************************************** 2026-02-13 05:27:23.189077 | orchestrator | Friday 13 February 2026 05:27:15 +0000 (0:00:00.142) 0:00:32.752 ******* 2026-02-13 
05:27:23.189081 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189084 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189088 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189092 | orchestrator | 2026-02-13 05:27:23.189096 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************ 2026-02-13 05:27:23.189099 | orchestrator | Friday 13 February 2026 05:27:15 +0000 (0:00:00.355) 0:00:33.107 ******* 2026-02-13 05:27:23.189103 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189107 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189110 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189114 | orchestrator | 2026-02-13 05:27:23.189118 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ****************************** 2026-02-13 05:27:23.189121 | orchestrator | Friday 13 February 2026 05:27:15 +0000 (0:00:00.542) 0:00:33.650 ******* 2026-02-13 05:27:23.189125 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189129 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189132 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189136 | orchestrator | 2026-02-13 05:27:23.189140 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ****************************** 2026-02-13 05:27:23.189188 | orchestrator | Friday 13 February 2026 05:27:16 +0000 (0:00:00.357) 0:00:34.008 ******* 2026-02-13 05:27:23.189192 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189196 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189200 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189204 | orchestrator | 2026-02-13 05:27:23.189207 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] *************************** 2026-02-13 05:27:23.189211 | orchestrator | Friday 13 February 2026 05:27:16 +0000 (0:00:00.321) 0:00:34.329 ******* 2026-02-13 
05:27:23.189215 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189219 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189222 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189226 | orchestrator | 2026-02-13 05:27:23.189230 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-02-13 05:27:23.189233 | orchestrator | Friday 13 February 2026 05:27:16 +0000 (0:00:00.321) 0:00:34.651 ******* 2026-02-13 05:27:23.189237 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189241 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189244 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189248 | orchestrator | 2026-02-13 05:27:23.189252 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-02-13 05:27:23.189256 | orchestrator | Friday 13 February 2026 05:27:17 +0000 (0:00:00.550) 0:00:35.202 ******* 2026-02-13 05:27:23.189259 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-13 05:27:23.189264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-13 05:27:23.189270 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-13 05:27:23.189276 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189282 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-13 05:27:23.189294 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-13 05:27:23.189300 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-13 05:27:23.189325 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189333 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-13 05:27:23.189339 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-13 05:27:23.189346 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-node-2)  2026-02-13 05:27:23.189352 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189359 | orchestrator | 2026-02-13 05:27:23.189366 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-02-13 05:27:23.189383 | orchestrator | Friday 13 February 2026 05:27:17 +0000 (0:00:00.358) 0:00:35.560 ******* 2026-02-13 05:27:23.189388 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189391 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189395 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189399 | orchestrator | 2026-02-13 05:27:23.189403 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-02-13 05:27:23.189407 | orchestrator | Friday 13 February 2026 05:27:18 +0000 (0:00:00.341) 0:00:35.902 ******* 2026-02-13 05:27:23.189410 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189414 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189418 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189422 | orchestrator | 2026-02-13 05:27:23.189425 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-02-13 05:27:23.189429 | orchestrator | Friday 13 February 2026 05:27:18 +0000 (0:00:00.502) 0:00:36.405 ******* 2026-02-13 05:27:23.189433 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189437 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189440 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189444 | orchestrator | 2026-02-13 05:27:23.189448 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-02-13 05:27:23.189453 | orchestrator | Friday 13 February 2026 05:27:19 +0000 (0:00:00.342) 0:00:36.747 ******* 2026-02-13 05:27:23.189457 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189460 | orchestrator | 
skipping: [testbed-node-1] 2026-02-13 05:27:23.189464 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189468 | orchestrator | 2026-02-13 05:27:23.189472 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-02-13 05:27:23.189487 | orchestrator | Friday 13 February 2026 05:27:19 +0000 (0:00:00.339) 0:00:37.086 ******* 2026-02-13 05:27:23.189491 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189494 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189498 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189502 | orchestrator | 2026-02-13 05:27:23.189506 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-02-13 05:27:23.189509 | orchestrator | Friday 13 February 2026 05:27:19 +0000 (0:00:00.339) 0:00:37.426 ******* 2026-02-13 05:27:23.189513 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189517 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189521 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189524 | orchestrator | 2026-02-13 05:27:23.189528 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-02-13 05:27:23.189533 | orchestrator | Friday 13 February 2026 05:27:20 +0000 (0:00:00.517) 0:00:37.944 ******* 2026-02-13 05:27:23.189537 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189541 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:23.189545 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189549 | orchestrator | 2026-02-13 05:27:23.189554 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-13 05:27:23.189558 | orchestrator | Friday 13 February 2026 05:27:20 +0000 (0:00:00.351) 0:00:38.295 ******* 2026-02-13 05:27:23.189563 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189567 | orchestrator | 
skipping: [testbed-node-1] 2026-02-13 05:27:23.189571 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:23.189580 | orchestrator | 2026-02-13 05:27:23.189584 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-02-13 05:27:23.189589 | orchestrator | Friday 13 February 2026 05:27:20 +0000 (0:00:00.322) 0:00:38.617 ******* 2026-02-13 05:27:23.189597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:23.189607 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:23.189616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:26.159639 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:26.159751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:26.159767 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:26.159777 | orchestrator | 2026-02-13 05:27:26.159786 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-13 05:27:26.159795 | orchestrator | Friday 13 February 2026 05:27:23 +0000 (0:00:02.309) 0:00:40.927 ******* 2026-02-13 05:27:26.159803 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:26.159811 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:26.159820 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:26.159828 | orchestrator | 2026-02-13 05:27:26.159849 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-02-13 05:27:26.159857 | orchestrator | Friday 13 February 2026 05:27:23 +0000 (0:00:00.522) 0:00:41.449 ******* 2026-02-13 05:27:26.159879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:26.159896 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:27:26.159905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:26.159914 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:27:26.159927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-13 05:27:26.159940 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:27:26.159948 | orchestrator | 2026-02-13 05:27:26.159957 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-02-13 05:27:26.159965 | orchestrator | Friday 13 February 2026 05:27:25 +0000 (0:00:02.250) 0:00:43.699 ******* 2026-02-13 05:27:26.159978 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:20.257980 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:20.258112 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:20.258122 | orchestrator | 2026-02-13 05:29:20.258128 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-13 05:29:20.258135 | orchestrator | Friday 13 February 2026 05:27:26 +0000 (0:00:00.693) 0:00:44.393 ******* 2026-02-13 05:29:20.258140 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:20.258145 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:20.258150 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:20.258155 | orchestrator | 2026-02-13 05:29:20.258160 | orchestrator | TASK [service-check : mariadb | Fail 
if containers are missing or not running] *** 2026-02-13 05:29:20.258166 | orchestrator | Friday 13 February 2026 05:27:27 +0000 (0:00:00.523) 0:00:44.917 ******* 2026-02-13 05:29:20.258171 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:20.258176 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:20.258181 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:20.258186 | orchestrator | 2026-02-13 05:29:20.258191 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-13 05:29:20.258196 | orchestrator | Friday 13 February 2026 05:27:27 +0000 (0:00:00.339) 0:00:45.256 ******* 2026-02-13 05:29:20.258201 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:20.258206 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:20.258211 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:20.258216 | orchestrator | 2026-02-13 05:29:20.258221 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-13 05:29:20.258226 | orchestrator | Friday 13 February 2026 05:27:28 +0000 (0:00:00.905) 0:00:46.162 ******* 2026-02-13 05:29:20.258230 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:20.258235 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:20.258240 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:20.258245 | orchestrator | 2026-02-13 05:29:20.258250 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-13 05:29:20.258255 | orchestrator | Friday 13 February 2026 05:27:29 +0000 (0:00:00.882) 0:00:47.044 ******* 2026-02-13 05:29:20.258260 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:20.258266 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:20.258271 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:20.258276 | orchestrator | 2026-02-13 05:29:20.258280 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume 
availability] ************* 2026-02-13 05:29:20.258285 | orchestrator | Friday 13 February 2026 05:27:30 +0000 (0:00:00.925) 0:00:47.970 ******* 2026-02-13 05:29:20.258290 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:20.258295 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:20.258300 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:20.258304 | orchestrator | 2026-02-13 05:29:20.258309 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-13 05:29:20.258314 | orchestrator | Friday 13 February 2026 05:27:30 +0000 (0:00:00.352) 0:00:48.322 ******* 2026-02-13 05:29:20.258319 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:20.258324 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:20.258329 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:20.258334 | orchestrator | 2026-02-13 05:29:20.258338 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-13 05:29:20.258343 | orchestrator | Friday 13 February 2026 05:27:30 +0000 (0:00:00.335) 0:00:48.657 ******* 2026-02-13 05:29:20.258365 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:20.258371 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:20.258385 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:20.258391 | orchestrator | 2026-02-13 05:29:20.258396 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-13 05:29:20.258400 | orchestrator | Friday 13 February 2026 05:27:31 +0000 (0:00:01.052) 0:00:49.710 ******* 2026-02-13 05:29:20.258405 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:20.258410 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:20.258417 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:20.258425 | orchestrator | 2026-02-13 05:29:20.258433 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-13 05:29:20.258446 | orchestrator | 
Friday 13 February 2026 05:27:32 +0000 (0:00:00.343) 0:00:50.053 ******* 2026-02-13 05:29:20.258453 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:20.258461 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:20.258469 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:20.258477 | orchestrator | 2026-02-13 05:29:20.258484 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-13 05:29:20.258491 | orchestrator | Friday 13 February 2026 05:27:32 +0000 (0:00:00.344) 0:00:50.397 ******* 2026-02-13 05:29:20.258498 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:20.258506 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:20.258514 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:20.258522 | orchestrator | 2026-02-13 05:29:20.258530 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-13 05:29:20.258538 | orchestrator | Friday 13 February 2026 05:27:35 +0000 (0:00:02.570) 0:00:52.968 ******* 2026-02-13 05:29:20.258544 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:20.258549 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:20.258555 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:20.258560 | orchestrator | 2026-02-13 05:29:20.258566 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-13 05:29:20.258571 | orchestrator | Friday 13 February 2026 05:27:35 +0000 (0:00:00.580) 0:00:53.549 ******* 2026-02-13 05:29:20.258577 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:20.258582 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:20.258588 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:20.258593 | orchestrator | 2026-02-13 05:29:20.258599 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-13 05:29:20.258605 | orchestrator | Friday 13 February 2026 05:27:36 +0000 
(0:00:00.337) 0:00:53.886 ******* 2026-02-13 05:29:20.258610 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:20.258615 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:20.258620 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:20.258625 | orchestrator | 2026-02-13 05:29:20.258629 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-13 05:29:20.258634 | orchestrator | Friday 13 February 2026 05:27:36 +0000 (0:00:00.723) 0:00:54.610 ******* 2026-02-13 05:29:20.258639 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:20.258644 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:20.258648 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:20.258663 | orchestrator | 2026-02-13 05:29:20.258668 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-13 05:29:20.258673 | orchestrator | Friday 13 February 2026 05:27:37 +0000 (0:00:00.502) 0:00:55.112 ******* 2026-02-13 05:29:20.258678 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:20.258683 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-13 05:29:20.258687 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-13 05:29:20.258697 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:20.258702 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:20.258707 | orchestrator | 2026-02-13 05:29:20.258720 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-13 05:29:20.258725 | orchestrator | Friday 13 February 2026 05:27:38 +0000 (0:00:00.763) 0:00:55.876 ******* 2026-02-13 05:29:20.258730 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:29:20.258734 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:29:20.258739 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:29:20.258744 | 
orchestrator | 2026-02-13 05:29:20.258749 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-13 05:29:20.258753 | orchestrator | Friday 13 February 2026 05:27:38 +0000 (0:00:00.569) 0:00:56.445 ******* 2026-02-13 05:29:20.258758 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:20.258763 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:20.258768 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:20.258772 | orchestrator | 2026-02-13 05:29:20.258777 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-13 05:29:20.258782 | orchestrator | 2026-02-13 05:29:20.258787 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-13 05:29:20.258792 | orchestrator | Friday 13 February 2026 05:27:39 +0000 (0:00:00.731) 0:00:57.176 ******* 2026-02-13 05:29:20.258796 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:29:20.258801 | orchestrator | 2026-02-13 05:29:20.258806 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-13 05:29:20.258811 | orchestrator | Friday 13 February 2026 05:28:03 +0000 (0:00:23.674) 0:01:20.851 ******* 2026-02-13 05:29:20.258815 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:20.258820 | orchestrator | 2026-02-13 05:29:20.258825 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-13 05:29:20.258829 | orchestrator | Friday 13 February 2026 05:28:07 +0000 (0:00:04.757) 0:01:25.609 ******* 2026-02-13 05:29:20.258834 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:20.258839 | orchestrator | 2026-02-13 05:29:20.258843 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-13 05:29:20.258848 | orchestrator | 2026-02-13 05:29:20.258853 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-13 05:29:20.258858 | orchestrator | Friday 13 February 2026 05:28:10 +0000 (0:00:02.608) 0:01:28.218 ******* 2026-02-13 05:29:20.258862 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:29:20.258867 | orchestrator | 2026-02-13 05:29:20.258872 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-13 05:29:20.258876 | orchestrator | Friday 13 February 2026 05:28:35 +0000 (0:00:25.145) 0:01:53.363 ******* 2026-02-13 05:29:20.258885 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:20.258890 | orchestrator | 2026-02-13 05:29:20.258895 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-13 05:29:20.258900 | orchestrator | Friday 13 February 2026 05:28:40 +0000 (0:00:04.654) 0:01:58.018 ******* 2026-02-13 05:29:20.258905 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:20.258909 | orchestrator | 2026-02-13 05:29:20.258914 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-13 05:29:20.258919 | orchestrator | 2026-02-13 05:29:20.258924 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-13 05:29:20.258928 | orchestrator | Friday 13 February 2026 05:28:43 +0000 (0:00:03.038) 0:02:01.057 ******* 2026-02-13 05:29:20.258933 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:29:20.258938 | orchestrator | 2026-02-13 05:29:20.258943 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-13 05:29:20.258947 | orchestrator | Friday 13 February 2026 05:29:07 +0000 (0:00:24.391) 0:02:25.448 ******* 2026-02-13 05:29:20.258952 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Wait for MariaDB service port liveness (10 retries left). 
2026-02-13 05:29:20.258957 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:20.258962 | orchestrator | 2026-02-13 05:29:20.258967 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-13 05:29:20.258972 | orchestrator | Friday 13 February 2026 05:29:15 +0000 (0:00:08.143) 0:02:33.591 ******* 2026-02-13 05:29:20.258981 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-13 05:29:20.258986 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-13 05:29:20.258991 | orchestrator | mariadb_bootstrap_restart 2026-02-13 05:29:20.258996 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:20.259000 | orchestrator | 2026-02-13 05:29:20.259005 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-13 05:29:20.259010 | orchestrator | skipping: no hosts matched 2026-02-13 05:29:20.259015 | orchestrator | 2026-02-13 05:29:20.259019 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-13 05:29:20.259024 | orchestrator | skipping: no hosts matched 2026-02-13 05:29:20.259029 | orchestrator | 2026-02-13 05:29:20.259033 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-13 05:29:20.259038 | orchestrator | 2026-02-13 05:29:20.259043 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-13 05:29:20.259048 | orchestrator | Friday 13 February 2026 05:29:19 +0000 (0:00:03.313) 0:02:36.905 ******* 2026-02-13 05:29:20.259088 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:29:20.259100 | orchestrator | 2026-02-13 05:29:20.259111 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-13 05:29:20.259125 | orchestrator | Friday 13 February 2026 
05:29:20 +0000 (0:00:01.080) 0:02:37.986 ******* 2026-02-13 05:29:58.417154 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:58.417305 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:58.417331 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:58.417352 | orchestrator | 2026-02-13 05:29:58.417371 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-13 05:29:58.417391 | orchestrator | Friday 13 February 2026 05:29:22 +0000 (0:00:02.157) 0:02:40.144 ******* 2026-02-13 05:29:58.417409 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:58.417428 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:58.417446 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:29:58.417463 | orchestrator | 2026-02-13 05:29:58.417483 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-13 05:29:58.417502 | orchestrator | Friday 13 February 2026 05:29:24 +0000 (0:00:02.267) 0:02:42.411 ******* 2026-02-13 05:29:58.417521 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:58.417540 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:58.417559 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:58.417578 | orchestrator | 2026-02-13 05:29:58.417597 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-13 05:29:58.417614 | orchestrator | Friday 13 February 2026 05:29:26 +0000 (0:00:02.248) 0:02:44.659 ******* 2026-02-13 05:29:58.417627 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:58.417639 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:58.417666 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:29:58.417679 | orchestrator | 2026-02-13 05:29:58.417692 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-13 05:29:58.417705 | orchestrator | Friday 13 February 2026 05:29:29 +0000 
(0:00:02.303) 0:02:46.963 ******* 2026-02-13 05:29:58.417718 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:58.417729 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:58.417740 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:58.417751 | orchestrator | 2026-02-13 05:29:58.417762 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-13 05:29:58.417774 | orchestrator | Friday 13 February 2026 05:29:34 +0000 (0:00:04.954) 0:02:51.918 ******* 2026-02-13 05:29:58.417785 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:58.417796 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:58.417806 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:58.417817 | orchestrator | 2026-02-13 05:29:58.417828 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-13 05:29:58.417865 | orchestrator | Friday 13 February 2026 05:29:36 +0000 (0:00:02.098) 0:02:54.016 ******* 2026-02-13 05:29:58.417877 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:29:58.417888 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:29:58.417898 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:29:58.417909 | orchestrator | 2026-02-13 05:29:58.417920 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-13 05:29:58.417931 | orchestrator | Friday 13 February 2026 05:29:36 +0000 (0:00:00.650) 0:02:54.667 ******* 2026-02-13 05:29:58.417942 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:29:58.417953 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:29:58.417964 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:29:58.417974 | orchestrator | 2026-02-13 05:29:58.417985 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-13 05:29:58.417996 | orchestrator | Friday 13 February 2026 05:29:39 +0000 (0:00:02.684) 0:02:57.351 ******* 
2026-02-13 05:29:58.418148 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:29:58.418169 | orchestrator | 2026-02-13 05:29:58.418180 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] ****************************** 2026-02-13 05:29:58.418191 | orchestrator | Friday 13 February 2026 05:29:40 +0000 (0:00:01.023) 0:02:58.374 ******* 2026-02-13 05:29:58.418201 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:29:58.418212 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:29:58.418223 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:29:58.418235 | orchestrator | 2026-02-13 05:29:58.418245 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 05:29:58.418258 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-13 05:29:58.418271 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-13 05:29:58.418281 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-13 05:29:58.418292 | orchestrator | 2026-02-13 05:29:58.418303 | orchestrator | 2026-02-13 05:29:58.418314 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 05:29:58.418325 | orchestrator | Friday 13 February 2026 05:29:57 +0000 (0:00:17.345) 0:03:15.720 ******* 2026-02-13 05:29:58.418335 | orchestrator | =============================================================================== 2026-02-13 05:29:58.418346 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 73.21s 2026-02-13 05:29:58.418357 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 17.56s 2026-02-13 05:29:58.418367 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 
17.35s 2026-02-13 05:29:58.418378 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 8.96s 2026-02-13 05:29:58.418389 | orchestrator | service-check : mariadb | Get container facts --------------------------- 4.95s 2026-02-13 05:29:58.418399 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.79s 2026-02-13 05:29:58.418410 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.22s 2026-02-13 05:29:58.418421 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.07s 2026-02-13 05:29:58.418431 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 2.93s 2026-02-13 05:29:58.418470 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.90s 2026-02-13 05:29:58.418491 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.74s 2026-02-13 05:29:58.418509 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.68s 2026-02-13 05:29:58.418527 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 2.57s 2026-02-13 05:29:58.418559 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.53s 2026-02-13 05:29:58.418576 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.52s 2026-02-13 05:29:58.418592 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.31s 2026-02-13 05:29:58.418609 | orchestrator | mariadb : Granting permissions on Mariabackup database to backup user --- 2.30s 2026-02-13 05:29:58.418626 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.27s 2026-02-13 05:29:58.418642 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.25s 
2026-02-13 05:29:58.418659 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.25s 2026-02-13 05:29:58.693278 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-13 05:30:00.772434 | orchestrator | 2026-02-13 05:30:00 | INFO  | Task da048864-ec5a-4b2f-9864-244a8312d18c (rabbitmq) was prepared for execution. 2026-02-13 05:30:00.772538 | orchestrator | 2026-02-13 05:30:00 | INFO  | It takes a moment until task da048864-ec5a-4b2f-9864-244a8312d18c (rabbitmq) has been started and output is visible here. 2026-02-13 05:30:44.383049 | orchestrator | 2026-02-13 05:30:44.383169 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 05:30:44.383188 | orchestrator | 2026-02-13 05:30:44.383200 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 05:30:44.383212 | orchestrator | Friday 13 February 2026 05:30:06 +0000 (0:00:01.313) 0:00:01.313 ******* 2026-02-13 05:30:44.383224 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:30:44.383236 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:30:44.383248 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:30:44.383259 | orchestrator | 2026-02-13 05:30:44.383270 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 05:30:44.383282 | orchestrator | Friday 13 February 2026 05:30:08 +0000 (0:00:01.897) 0:00:03.210 ******* 2026-02-13 05:30:44.383293 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-13 05:30:44.383305 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-13 05:30:44.383316 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-13 05:30:44.383327 | orchestrator | 2026-02-13 05:30:44.383338 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-13 05:30:44.383349 | orchestrator | 
2026-02-13 05:30:44.383361 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-13 05:30:44.383372 | orchestrator | Friday 13 February 2026 05:30:10 +0000 (0:00:02.778) 0:00:05.989 ******* 2026-02-13 05:30:44.383402 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:30:44.383415 | orchestrator | 2026-02-13 05:30:44.383426 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-13 05:30:44.383437 | orchestrator | Friday 13 February 2026 05:30:12 +0000 (0:00:01.969) 0:00:07.959 ******* 2026-02-13 05:30:44.383448 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:30:44.383459 | orchestrator | 2026-02-13 05:30:44.383471 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-13 05:30:44.383482 | orchestrator | Friday 13 February 2026 05:30:15 +0000 (0:00:02.291) 0:00:10.250 ******* 2026-02-13 05:30:44.383493 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:30:44.383505 | orchestrator | 2026-02-13 05:30:44.383518 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-13 05:30:44.383531 | orchestrator | Friday 13 February 2026 05:30:18 +0000 (0:00:03.332) 0:00:13.583 ******* 2026-02-13 05:30:44.383544 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:30:44.383557 | orchestrator | 2026-02-13 05:30:44.383570 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-13 05:30:44.383583 | orchestrator | Friday 13 February 2026 05:30:28 +0000 (0:00:09.877) 0:00:23.461 ******* 2026-02-13 05:30:44.383595 | orchestrator | ok: [testbed-node-0] => { 2026-02-13 05:30:44.383630 | orchestrator |  "changed": false, 2026-02-13 05:30:44.383644 | orchestrator |  "msg": "All assertions passed" 2026-02-13 05:30:44.383657 | orchestrator | } 2026-02-13 
05:30:44.383670 | orchestrator | 2026-02-13 05:30:44.383683 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-13 05:30:44.383695 | orchestrator | Friday 13 February 2026 05:30:29 +0000 (0:00:01.283) 0:00:24.745 ******* 2026-02-13 05:30:44.383708 | orchestrator | ok: [testbed-node-0] => { 2026-02-13 05:30:44.383721 | orchestrator |  "changed": false, 2026-02-13 05:30:44.383733 | orchestrator |  "msg": "All assertions passed" 2026-02-13 05:30:44.383746 | orchestrator | } 2026-02-13 05:30:44.383758 | orchestrator | 2026-02-13 05:30:44.383771 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-13 05:30:44.383784 | orchestrator | Friday 13 February 2026 05:30:31 +0000 (0:00:01.653) 0:00:26.399 ******* 2026-02-13 05:30:44.383796 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:30:44.383810 | orchestrator | 2026-02-13 05:30:44.383824 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-13 05:30:44.383837 | orchestrator | Friday 13 February 2026 05:30:33 +0000 (0:00:01.707) 0:00:28.106 ******* 2026-02-13 05:30:44.383848 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:30:44.383860 | orchestrator | 2026-02-13 05:30:44.383871 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-13 05:30:44.383882 | orchestrator | Friday 13 February 2026 05:30:35 +0000 (0:00:02.201) 0:00:30.308 ******* 2026-02-13 05:30:44.383893 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:30:44.383905 | orchestrator | 2026-02-13 05:30:44.383916 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-13 05:30:44.383927 | orchestrator | Friday 13 February 2026 05:30:38 +0000 (0:00:02.987) 0:00:33.295 ******* 2026-02-13 05:30:44.383951 | 
orchestrator | skipping: [testbed-node-0] 2026-02-13 05:30:44.383963 | orchestrator | 2026-02-13 05:30:44.383973 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-13 05:30:44.383985 | orchestrator | Friday 13 February 2026 05:30:40 +0000 (0:00:01.872) 0:00:35.168 ******* 2026-02-13 05:30:44.384075 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:30:44.384101 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:30:44.384126 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:30:44.384138 | orchestrator | 2026-02-13 05:30:44.384150 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-13 05:30:44.384161 | orchestrator | Friday 13 February 2026 05:30:41 +0000 (0:00:01.797) 0:00:36.966 ******* 2026-02-13 05:30:44.384174 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:30:44.384196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:31:03.593281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:31:03.593417 | orchestrator | 2026-02-13 05:31:03.593437 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-13 05:31:03.593450 | orchestrator | Friday 13 February 2026 05:30:44 +0000 (0:00:02.481) 0:00:39.448 ******* 2026-02-13 05:31:03.593461 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-13 05:31:03.593474 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-13 05:31:03.593494 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-13 05:31:03.593513 | 
orchestrator | 2026-02-13 05:31:03.593531 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-13 05:31:03.593551 | orchestrator | Friday 13 February 2026 05:30:46 +0000 (0:00:02.375) 0:00:41.823 ******* 2026-02-13 05:31:03.593570 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-13 05:31:03.593588 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-13 05:31:03.593607 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-13 05:31:03.593627 | orchestrator | 2026-02-13 05:31:03.593647 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-13 05:31:03.593667 | orchestrator | Friday 13 February 2026 05:30:49 +0000 (0:00:03.044) 0:00:44.867 ******* 2026-02-13 05:31:03.593686 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-13 05:31:03.593705 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-13 05:31:03.593725 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-13 05:31:03.593744 | orchestrator | 2026-02-13 05:31:03.593760 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-13 05:31:03.593772 | orchestrator | Friday 13 February 2026 05:30:52 +0000 (0:00:02.392) 0:00:47.260 ******* 2026-02-13 05:31:03.593783 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-13 05:31:03.593794 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-13 05:31:03.593805 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 
2026-02-13 05:31:03.593818 | orchestrator | 2026-02-13 05:31:03.593837 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-13 05:31:03.593855 | orchestrator | Friday 13 February 2026 05:30:54 +0000 (0:00:02.406) 0:00:49.667 ******* 2026-02-13 05:31:03.593872 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-13 05:31:03.593890 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-13 05:31:03.593908 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-13 05:31:03.594126 | orchestrator | 2026-02-13 05:31:03.594151 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-13 05:31:03.594171 | orchestrator | Friday 13 February 2026 05:30:56 +0000 (0:00:02.244) 0:00:51.911 ******* 2026-02-13 05:31:03.594189 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-13 05:31:03.594204 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-13 05:31:03.594216 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-13 05:31:03.594235 | orchestrator | 2026-02-13 05:31:03.594253 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-13 05:31:03.594272 | orchestrator | Friday 13 February 2026 05:30:59 +0000 (0:00:02.494) 0:00:54.406 ******* 2026-02-13 05:31:03.594290 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:31:03.594310 | orchestrator | 2026-02-13 05:31:03.594353 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-13 05:31:03.594373 | orchestrator | Friday 
13 February 2026 05:31:01 +0000 (0:00:01.750) 0:00:56.156 ******* 2026-02-13 05:31:03.594407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:31:03.594427 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:31:03.594441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:31:03.594462 | orchestrator | 2026-02-13 05:31:03.594473 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-13 05:31:03.594484 | orchestrator | Friday 13 February 2026 05:31:03 +0000 (0:00:02.289) 0:00:58.446 ******* 2026-02-13 05:31:03.594506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:31:12.605495 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:31:12.605606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 
'rabbitmq'}}}})  2026-02-13 05:31:12.605627 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:31:12.605639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:31:12.605676 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:31:12.605688 | orchestrator | 2026-02-13 05:31:12.605700 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-13 05:31:12.605713 | orchestrator | Friday 13 February 2026 05:31:04 +0000 (0:00:01.424) 0:00:59.871 ******* 2026-02-13 05:31:12.605765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:31:12.605802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:31:12.605815 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:31:12.605826 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:31:12.605837 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:31:12.605849 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:31:12.605859 | orchestrator | 2026-02-13 05:31:12.605869 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-13 05:31:12.605888 | orchestrator | Friday 13 February 2026 05:31:06 +0000 (0:00:01.797) 0:01:01.668 ******* 2026-02-13 05:31:12.605900 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:31:12.605912 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:31:12.605922 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:31:12.605933 | orchestrator | 2026-02-13 05:31:12.605943 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] ****************** 2026-02-13 05:31:12.605953 | orchestrator | Friday 13 February 2026 05:31:10 +0000 (0:00:03.809) 0:01:05.478 ******* 2026-02-13 05:31:12.605964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:31:12.606093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:33:01.451025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-13 05:33:01.451143 | orchestrator | 2026-02-13 05:33:01.451160 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-13 05:33:01.451174 | orchestrator | Friday 13 February 2026 05:31:12 +0000 (0:00:02.197) 0:01:07.676 ******* 2026-02-13 05:33:01.451212 | orchestrator | changed: [testbed-node-0] => { 2026-02-13 05:33:01.451226 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:33:01.451237 | orchestrator | } 2026-02-13 05:33:01.451249 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:33:01.451261 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:33:01.451273 | orchestrator | } 2026-02-13 05:33:01.451284 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 
05:33:01.451296 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:33:01.451308 | orchestrator | } 2026-02-13 05:33:01.451320 | orchestrator | 2026-02-13 05:33:01.451331 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-13 05:33:01.451343 | orchestrator | Friday 13 February 2026 05:31:13 +0000 (0:00:01.361) 0:01:09.038 ******* 2026-02-13 05:33:01.451356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:33:01.451369 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:33:01.451397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:33:01.451409 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:33:01.451440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-13 05:33:01.451460 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:33:01.451471 | orchestrator | 
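The three "Restart rabbitmq services" plays in this log run strictly one node at a time: get container info, put the node into maintenance mode, restart the container, wait for RabbitMQ to come back, then move on. A minimal sketch of that serial ordering, with illustrative callback names (not kolla-ansible's API):

```python
# Sketch of the rolling-restart order visible in the log: each node is
# drained, restarted, and waited on before the next node is touched, so
# the cluster keeps a quorum throughout the upgrade.
def rolling_restart(nodes, drain, restart, wait_until_up):
    completed = []
    for node in nodes:  # serial: one play per node, never in parallel
        drain(node)          # "Put RabbitMQ node into maintenance mode"
        restart(node)        # "Restart rabbitmq container"
        wait_until_up(node)  # "Waiting for rabbitmq to start"
        completed.append(node)
    return completed

order = rolling_restart(
    ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    drain=lambda n: None,
    restart=lambda n: None,
    wait_until_up=lambda n: None,
)
print(order)
```

The per-node timings in the log (roughly 9-14s per restart, ~10s per wait) reflect this serialization: the whole restart phase takes about three times a single node's cycle.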
2026-02-13 05:33:01.451483 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-13 05:33:01.451494 | orchestrator | Friday 13 February 2026 05:31:16 +0000 (0:00:02.305) 0:01:11.344 ******* 2026-02-13 05:33:01.451505 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:33:01.451516 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:33:01.451527 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:33:01.451539 | orchestrator | 2026-02-13 05:33:01.451551 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-13 05:33:01.451562 | orchestrator | 2026-02-13 05:33:01.451574 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-13 05:33:01.451586 | orchestrator | Friday 13 February 2026 05:31:18 +0000 (0:00:02.276) 0:01:13.620 ******* 2026-02-13 05:33:01.451599 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:33:01.451611 | orchestrator | 2026-02-13 05:33:01.451623 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-13 05:33:01.451636 | orchestrator | Friday 13 February 2026 05:31:20 +0000 (0:00:02.002) 0:01:15.622 ******* 2026-02-13 05:33:01.451648 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:33:01.451660 | orchestrator | 2026-02-13 05:33:01.451672 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-13 05:33:01.451684 | orchestrator | Friday 13 February 2026 05:31:30 +0000 (0:00:09.515) 0:01:25.138 ******* 2026-02-13 05:33:01.451696 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:33:01.451708 | orchestrator | 2026-02-13 05:33:01.451719 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-13 05:33:01.451730 | orchestrator | Friday 13 February 2026 05:31:39 +0000 (0:00:09.116) 0:01:34.254 ******* 2026-02-13 05:33:01.451741 | 
orchestrator | changed: [testbed-node-0] 2026-02-13 05:33:01.451752 | orchestrator | 2026-02-13 05:33:01.451763 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-13 05:33:01.451774 | orchestrator | 2026-02-13 05:33:01.451785 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-13 05:33:01.451797 | orchestrator | Friday 13 February 2026 05:31:48 +0000 (0:00:09.759) 0:01:44.014 ******* 2026-02-13 05:33:01.451808 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:33:01.451820 | orchestrator | 2026-02-13 05:33:01.451832 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-13 05:33:01.451843 | orchestrator | Friday 13 February 2026 05:31:50 +0000 (0:00:01.743) 0:01:45.758 ******* 2026-02-13 05:33:01.451855 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:33:01.451867 | orchestrator | 2026-02-13 05:33:01.451879 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-13 05:33:01.451890 | orchestrator | Friday 13 February 2026 05:32:00 +0000 (0:00:09.399) 0:01:55.157 ******* 2026-02-13 05:33:01.451943 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:33:01.451956 | orchestrator | 2026-02-13 05:33:01.451968 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-13 05:33:01.451980 | orchestrator | Friday 13 February 2026 05:32:14 +0000 (0:00:13.945) 0:02:09.103 ******* 2026-02-13 05:33:01.451992 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:33:01.452004 | orchestrator | 2026-02-13 05:33:01.452016 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-13 05:33:01.452028 | orchestrator | 2026-02-13 05:33:01.452040 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-13 05:33:01.452052 | 
orchestrator | Friday 13 February 2026 05:32:23 +0000 (0:00:09.727) 0:02:18.830 ******* 2026-02-13 05:33:01.452064 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:33:01.452077 | orchestrator | 2026-02-13 05:33:01.452089 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-13 05:33:01.452110 | orchestrator | Friday 13 February 2026 05:32:25 +0000 (0:00:01.813) 0:02:20.644 ******* 2026-02-13 05:33:01.452122 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:33:01.452135 | orchestrator | 2026-02-13 05:33:01.452146 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-13 05:33:01.452159 | orchestrator | Friday 13 February 2026 05:32:35 +0000 (0:00:10.378) 0:02:31.022 ******* 2026-02-13 05:33:01.452171 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:33:01.452183 | orchestrator | 2026-02-13 05:33:01.452195 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-13 05:33:01.452213 | orchestrator | Friday 13 February 2026 05:32:50 +0000 (0:00:14.343) 0:02:45.366 ******* 2026-02-13 05:33:01.452225 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:33:01.452237 | orchestrator | 2026-02-13 05:33:01.452249 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-13 05:33:01.452261 | orchestrator | 2026-02-13 05:33:01.452272 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-02-13 05:33:01.452290 | orchestrator | Friday 13 February 2026 05:33:01 +0000 (0:00:11.146) 0:02:56.513 ******* 2026-02-13 05:33:07.442445 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:33:07.442532 | orchestrator | 2026-02-13 05:33:07.442543 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-13 05:33:07.442552 | 
orchestrator | Friday 13 February 2026 05:33:02 +0000 (0:00:01.321) 0:02:57.834 ******* 2026-02-13 05:33:07.442559 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:33:07.442568 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:33:07.442575 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:33:07.442582 | orchestrator | 2026-02-13 05:33:07.442590 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 05:33:07.442602 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-13 05:33:07.442617 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 05:33:07.442628 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-13 05:33:07.442640 | orchestrator | 2026-02-13 05:33:07.442652 | orchestrator | 2026-02-13 05:33:07.442664 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 05:33:07.442676 | orchestrator | Friday 13 February 2026 05:33:07 +0000 (0:00:04.350) 0:03:02.185 ******* 2026-02-13 05:33:07.442688 | orchestrator | =============================================================================== 2026-02-13 05:33:07.442700 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 37.41s 2026-02-13 05:33:07.442712 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 30.63s 2026-02-13 05:33:07.442724 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 29.29s 2026-02-13 05:33:07.442737 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.88s 2026-02-13 05:33:07.442750 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 5.56s 2026-02-13 05:33:07.442762 | orchestrator | rabbitmq : Enable all 
stable feature flags ------------------------------ 4.35s 2026-02-13 05:33:07.442776 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 3.81s 2026-02-13 05:33:07.442785 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 3.33s 2026-02-13 05:33:07.442792 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.04s 2026-02-13 05:33:07.442803 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 2.99s 2026-02-13 05:33:07.442815 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.78s 2026-02-13 05:33:07.442828 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.49s 2026-02-13 05:33:07.442870 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.48s 2026-02-13 05:33:07.442884 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.41s 2026-02-13 05:33:07.442952 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 2.39s 2026-02-13 05:33:07.442962 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.38s 2026-02-13 05:33:07.442969 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.31s 2026-02-13 05:33:07.442977 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 2.29s 2026-02-13 05:33:07.442985 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 2.29s 2026-02-13 05:33:07.442994 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 2.28s 2026-02-13 05:33:07.719460 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-13 05:33:09.763305 | orchestrator | 2026-02-13 05:33:09 | INFO  | Task c896eda0-44e1-40ba-b274-c096cf234e1f 
(openvswitch) was prepared for execution.
2026-02-13 05:33:09.763399 | orchestrator | 2026-02-13 05:33:09 | INFO  | It takes a moment until task c896eda0-44e1-40ba-b274-c096cf234e1f (openvswitch) has been started and output is visible here.
2026-02-13 05:33:35.897114 | orchestrator |
2026-02-13 05:33:35.897217 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-13 05:33:35.897232 | orchestrator |
2026-02-13 05:33:35.897242 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-13 05:33:35.897253 | orchestrator | Friday 13 February 2026 05:33:15 +0000 (0:00:01.629) 0:00:01.629 *******
2026-02-13 05:33:35.897263 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:33:35.897274 | orchestrator | ok: [testbed-node-1]
2026-02-13 05:33:35.897284 | orchestrator | ok: [testbed-node-2]
2026-02-13 05:33:35.897305 | orchestrator | ok: [testbed-node-3]
2026-02-13 05:33:35.897315 | orchestrator | ok: [testbed-node-4]
2026-02-13 05:33:35.897325 | orchestrator | ok: [testbed-node-5]
2026-02-13 05:33:35.897336 | orchestrator |
2026-02-13 05:33:35.897345 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-13 05:33:35.897355 | orchestrator | Friday 13 February 2026 05:33:17 +0000 (0:00:02.278) 0:00:03.908 *******
2026-02-13 05:33:35.897383 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-13 05:33:35.897395 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-13 05:33:35.897405 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-13 05:33:35.897416 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-13 05:33:35.897426 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-13 05:33:35.897435 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-13 05:33:35.897444 | orchestrator |
2026-02-13 05:33:35.897454 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-13 05:33:35.897465 | orchestrator |
2026-02-13 05:33:35.897475 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-13 05:33:35.897485 | orchestrator | Friday 13 February 2026 05:33:21 +0000 (0:00:03.820) 0:00:07.728 *******
2026-02-13 05:33:35.897497 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 05:33:35.897508 | orchestrator |
2026-02-13 05:33:35.897518 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-13 05:33:35.897528 | orchestrator | Friday 13 February 2026 05:33:23 +0000 (0:00:02.277) 0:00:10.006 *******
2026-02-13 05:33:35.897538 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-02-13 05:33:35.897548 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-02-13 05:33:35.897578 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-02-13 05:33:35.897589 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-02-13 05:33:35.897599 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-02-13 05:33:35.897610 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-02-13 05:33:35.897620 | orchestrator |
2026-02-13 05:33:35.897631 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-13 05:33:35.897641 | orchestrator | Friday 13 February 2026 05:33:25 +0000 (0:00:02.172) 0:00:12.178 *******
2026-02-13 05:33:35.897651 | orchestrator | ok: [testbed-node-0] => (item=openvswitch)
2026-02-13 05:33:35.897663 | orchestrator | ok: [testbed-node-3] => (item=openvswitch)
2026-02-13 05:33:35.897674 | orchestrator | ok: [testbed-node-1] => (item=openvswitch)
2026-02-13 05:33:35.897685 | orchestrator | ok: [testbed-node-2] => (item=openvswitch)
2026-02-13 05:33:35.897696 | orchestrator | ok: [testbed-node-4] => (item=openvswitch)
2026-02-13 05:33:35.897707 | orchestrator | ok: [testbed-node-5] => (item=openvswitch)
2026-02-13 05:33:35.897718 | orchestrator |
2026-02-13 05:33:35.897729 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-13 05:33:35.897740 | orchestrator | Friday 13 February 2026 05:33:28 +0000 (0:00:02.637) 0:00:14.816 *******
2026-02-13 05:33:35.897752 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-02-13 05:33:35.897763 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:33:35.897776 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-02-13 05:33:35.897787 | orchestrator | skipping: [testbed-node-1]
2026-02-13 05:33:35.897798 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-02-13 05:33:35.897810 | orchestrator | skipping: [testbed-node-2]
2026-02-13 05:33:35.897821 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-02-13 05:33:35.897832 | orchestrator | skipping: [testbed-node-3]
2026-02-13 05:33:35.897843 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-02-13 05:33:35.897854 | orchestrator | skipping: [testbed-node-4]
2026-02-13 05:33:35.897922 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-02-13 05:33:35.897934 | orchestrator | skipping: [testbed-node-5]
2026-02-13 05:33:35.897946 | orchestrator |
2026-02-13 05:33:35.897957 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-02-13 05:33:35.897968 | orchestrator | Friday 13 February 2026 05:33:31 +0000 (0:00:02.035) 0:00:19.439 *******
2026-02-13 05:33:35.897979 | orchestrator |
skipping: [testbed-node-0] 2026-02-13 05:33:35.897989 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:33:35.897999 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:33:35.898010 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:33:35.898080 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:33:35.898090 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:33:35.898101 | orchestrator | 2026-02-13 05:33:35.898111 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-13 05:33:35.898122 | orchestrator | Friday 13 February 2026 05:33:33 +0000 (0:00:02.035) 0:00:19.439 ******* 2026-02-13 05:33:35.898156 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:35.898181 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:35.898201 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:35.898213 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:35.898225 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:35.898236 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:35.898259 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 
05:33:38.196053 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196126 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196133 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196138 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196142 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196147 | orchestrator | 2026-02-13 05:33:38.196152 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-13 05:33:38.196174 | orchestrator | Friday 13 February 2026 
05:33:35 +0000 (0:00:02.717) 0:00:22.156 ******* 2026-02-13 05:33:38.196198 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196203 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196207 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196211 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196215 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196219 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:38.196244 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:43.889424 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:43.889545 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:43.889562 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:43.889574 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:43.889628 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:43.889641 | orchestrator | 2026-02-13 05:33:43.889654 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-13 05:33:43.889667 | orchestrator | Friday 13 February 2026 05:33:39 +0000 (0:00:03.476) 0:00:25.633 ******* 2026-02-13 05:33:43.889678 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:33:43.889690 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:33:43.889700 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:33:43.889711 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:33:43.889722 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:33:43.889732 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:33:43.889743 | orchestrator | 2026-02-13 05:33:43.889754 | orchestrator | TASK [service-check-containers : openvswitch | Check 
containers] *************** 2026-02-13 05:33:43.889782 | orchestrator | Friday 13 February 2026 05:33:41 +0000 (0:00:02.406) 0:00:28.040 ******* 2026-02-13 05:33:43.889795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:43.889808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:43.889820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:43.889839 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:43.889902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2026-02-13 05:33:43.889924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-13 05:33:47.598305 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:47.598419 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:47.598440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:47.598493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:47.598529 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:47.598571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-13 05:33:47.598589 | orchestrator | 2026-02-13 05:33:47.598608 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] *** 2026-02-13 05:33:47.598626 | orchestrator | Friday 13 February 2026 05:33:45 +0000 (0:00:03.390) 0:00:31.431 ******* 2026-02-13 05:33:47.598644 | orchestrator | changed: [testbed-node-0] => { 2026-02-13 05:33:47.598662 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:33:47.598678 | orchestrator | } 2026-02-13 05:33:47.598694 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:33:47.598710 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:33:47.598726 | 
orchestrator | } 2026-02-13 05:33:47.598742 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 05:33:47.598758 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:33:47.598775 | orchestrator | } 2026-02-13 05:33:47.598792 | orchestrator | changed: [testbed-node-3] => { 2026-02-13 05:33:47.598808 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:33:47.598826 | orchestrator | } 2026-02-13 05:33:47.598910 | orchestrator | changed: [testbed-node-4] => { 2026-02-13 05:33:47.598931 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:33:47.598947 | orchestrator | } 2026-02-13 05:33:47.598964 | orchestrator | changed: [testbed-node-5] => { 2026-02-13 05:33:47.598980 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:33:47.598998 | orchestrator | } 2026-02-13 05:33:47.599018 | orchestrator | 2026-02-13 05:33:47.599037 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-13 05:33:47.599069 | orchestrator | Friday 13 February 2026 05:33:47 +0000 (0:00:01.997) 0:00:33.428 ******* 2026-02-13 05:33:47.599086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-13 05:33:47.599106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-13 05:33:47.599122 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:33:47.599149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-13 05:33:47.599167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-13 05:33:47.599198 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:34:18.829750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-13 05:34:18.829998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-13 05:34:18.830091 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:34:18.830107 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-13 05:34:18.830119 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-13 05:34:18.830131 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:34:18.830158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-13 05:34:18.830194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-13 05:34:18.830206 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:34:18.830218 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-13 05:34:18.830238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-13 05:34:18.830252 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:34:18.830265 | orchestrator | 2026-02-13 05:34:18.830278 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-13 05:34:18.830292 | orchestrator | Friday 13 February 2026 05:33:49 +0000 (0:00:02.491) 0:00:35.920 ******* 2026-02-13 05:34:18.830305 | orchestrator | 2026-02-13 05:34:18.830317 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-13 05:34:18.830329 | orchestrator | Friday 13 February 2026 05:33:50 +0000 (0:00:00.503) 0:00:36.424 ******* 2026-02-13 05:34:18.830342 | orchestrator | 2026-02-13 05:34:18.830354 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-13 05:34:18.830367 | orchestrator | Friday 13 February 2026 05:33:50 +0000 (0:00:00.521) 0:00:36.946 ******* 2026-02-13 05:34:18.830379 | orchestrator | 2026-02-13 05:34:18.830392 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-13 05:34:18.830404 | orchestrator | Friday 13 February 2026 05:33:51 +0000 (0:00:00.487) 0:00:37.433 ******* 2026-02-13 05:34:18.830417 | orchestrator | 2026-02-13 05:34:18.830430 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2026-02-13 05:34:18.830442 | orchestrator | Friday 13 February 2026 05:33:51 +0000 (0:00:00.676) 0:00:38.109 ******* 2026-02-13 05:34:18.830454 | orchestrator | 2026-02-13 05:34:18.830466 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-13 05:34:18.830479 | orchestrator | Friday 13 February 2026 05:33:52 +0000 (0:00:00.511) 0:00:38.620 ******* 2026-02-13 05:34:18.830491 | orchestrator | 2026-02-13 05:34:18.830503 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-13 05:34:18.830516 | orchestrator | Friday 13 February 2026 05:33:53 +0000 (0:00:00.853) 0:00:39.474 ******* 2026-02-13 05:34:18.830528 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:34:18.830541 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:34:18.830558 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:34:18.830572 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:34:18.830584 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:34:18.830596 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:34:18.830608 | orchestrator | 2026-02-13 05:34:18.830619 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-13 05:34:18.830631 | orchestrator | Friday 13 February 2026 05:34:05 +0000 (0:00:11.977) 0:00:51.452 ******* 2026-02-13 05:34:18.830642 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:34:18.830654 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:34:18.830665 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:34:18.830676 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:34:18.830694 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:34:18.830707 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:34:18.830726 | orchestrator | 2026-02-13 05:34:18.830743 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] 
********* 2026-02-13 05:34:18.830762 | orchestrator | Friday 13 February 2026 05:34:07 +0000 (0:00:02.229) 0:00:53.681 ******* 2026-02-13 05:34:18.830780 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:34:18.830798 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:34:18.830839 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:34:18.830857 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:34:18.830875 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:34:18.830892 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:34:18.830909 | orchestrator | 2026-02-13 05:34:18.830927 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-13 05:34:18.830958 | orchestrator | Friday 13 February 2026 05:34:18 +0000 (0:00:11.401) 0:01:05.083 ******* 2026-02-13 05:34:34.690400 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-13 05:34:34.690510 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-02-13 05:34:34.690525 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-13 05:34:34.690537 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-13 05:34:34.690548 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-13 05:34:34.690559 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-13 05:34:34.690570 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-13 05:34:34.690581 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 
'testbed-node-1'}) 2026-02-13 05:34:34.690592 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-13 05:34:34.690603 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-13 05:34:34.690614 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-13 05:34:34.690625 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-13 05:34:34.690636 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 05:34:34.690647 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 05:34:34.690658 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 05:34:34.690668 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 05:34:34.690679 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 05:34:34.690690 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-13 05:34:34.690701 | orchestrator | 2026-02-13 05:34:34.690714 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-13 05:34:34.690725 | orchestrator | Friday 13 February 2026 05:34:26 +0000 (0:00:07.929) 0:01:13.012 ******* 2026-02-13 05:34:34.690737 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-13 05:34:34.690748 | orchestrator | skipping: [testbed-node-3] 2026-02-13 
05:34:34.690760 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-13 05:34:34.690797 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:34:34.690886 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-13 05:34:34.690898 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:34:34.690909 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-02-13 05:34:34.690920 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-02-13 05:34:34.690931 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-02-13 05:34:34.690941 | orchestrator | 2026-02-13 05:34:34.690953 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-13 05:34:34.690963 | orchestrator | Friday 13 February 2026 05:34:30 +0000 (0:00:03.270) 0:01:16.283 ******* 2026-02-13 05:34:34.690974 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-13 05:34:34.690985 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:34:34.690996 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-13 05:34:34.691016 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:34:34.691057 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-13 05:34:34.691084 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:34:34.691102 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-13 05:34:34.691120 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-13 05:34:34.691138 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-13 05:34:34.691156 | orchestrator | 2026-02-13 05:34:34.691172 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 05:34:34.691190 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-13 05:34:34.691209 | orchestrator | 
testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-13 05:34:34.691226 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-13 05:34:34.691246 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:34:34.691288 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:34:34.691309 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:34:34.691329 | orchestrator | 2026-02-13 05:34:34.691347 | orchestrator | 2026-02-13 05:34:34.691367 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 05:34:34.691385 | orchestrator | Friday 13 February 2026 05:34:34 +0000 (0:00:04.271) 0:01:20.554 ******* 2026-02-13 05:34:34.691404 | orchestrator | =============================================================================== 2026-02-13 05:34:34.691423 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.98s 2026-02-13 05:34:34.691441 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 11.40s 2026-02-13 05:34:34.691453 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.93s 2026-02-13 05:34:34.691464 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.27s 2026-02-13 05:34:34.691474 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.82s 2026-02-13 05:34:34.691485 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 3.55s 2026-02-13 05:34:34.691499 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.48s 2026-02-13 05:34:34.691519 | orchestrator | 
service-check-containers : openvswitch | Check containers --------------- 3.39s 2026-02-13 05:34:34.691536 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.27s 2026-02-13 05:34:34.691571 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.72s 2026-02-13 05:34:34.691588 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.64s 2026-02-13 05:34:34.691606 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.59s 2026-02-13 05:34:34.691625 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.49s 2026-02-13 05:34:34.691646 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 2.41s 2026-02-13 05:34:34.691665 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.28s 2026-02-13 05:34:34.691684 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.28s 2026-02-13 05:34:34.691704 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.23s 2026-02-13 05:34:34.691724 | orchestrator | module-load : Load modules ---------------------------------------------- 2.17s 2026-02-13 05:34:34.691742 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 2.04s 2026-02-13 05:34:34.691757 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 2.00s 2026-02-13 05:34:35.001517 | orchestrator | + osism apply -a upgrade ovn 2026-02-13 05:34:37.002793 | orchestrator | 2026-02-13 05:34:37 | INFO  | Task 78dd08f0-9e25-4225-8130-9cdecb26235d (ovn) was prepared for execution. 2026-02-13 05:34:37.002951 | orchestrator | 2026-02-13 05:34:37 | INFO  | It takes a moment until task 78dd08f0-9e25-4225-8130-9cdecb26235d (ovn) has been started and output is visible here. 
2026-02-13 05:34:49.731044 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-13 05:34:49.731134 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-13 05:34:49.731154 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-13 05:34:49.731161 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-13 05:34:49.731175 | orchestrator | 2026-02-13 05:34:49.731183 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-13 05:34:49.731189 | orchestrator | 2026-02-13 05:34:49.731209 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-13 05:34:49.731216 | orchestrator | Friday 13 February 2026 05:34:41 +0000 (0:00:00.908) 0:00:00.908 ******* 2026-02-13 05:34:49.731223 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:34:49.731231 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:34:49.731238 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:34:49.731244 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:34:49.731251 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:34:49.731258 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:34:49.731264 | orchestrator | 2026-02-13 05:34:49.731271 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-13 05:34:49.731278 | orchestrator | Friday 13 February 2026 05:34:43 +0000 (0:00:01.487) 0:00:02.396 ******* 2026-02-13 05:34:49.731285 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-13 05:34:49.731292 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-13 05:34:49.731299 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-13 05:34:49.731305 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-13 05:34:49.731312 | orchestrator | ok: [testbed-node-4] => 
(item=enable_ovn_True) 2026-02-13 05:34:49.731319 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-13 05:34:49.731325 | orchestrator | 2026-02-13 05:34:49.731332 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-13 05:34:49.731339 | orchestrator | 2026-02-13 05:34:49.731346 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-13 05:34:49.731368 | orchestrator | Friday 13 February 2026 05:34:44 +0000 (0:00:01.270) 0:00:03.667 ******* 2026-02-13 05:34:49.731375 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 05:34:49.731383 | orchestrator | 2026-02-13 05:34:49.731390 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-13 05:34:49.731397 | orchestrator | Friday 13 February 2026 05:34:46 +0000 (0:00:01.810) 0:00:05.477 ******* 2026-02-13 05:34:49.731405 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731415 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-13 05:34:49.731422 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731428 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731448 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731459 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731466 | orchestrator | 2026-02-13 
05:34:49.731473 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-13 05:34:49.731480 | orchestrator | Friday 13 February 2026 05:34:47 +0000 (0:00:01.247) 0:00:06.724 ******* 2026-02-13 05:34:49.731487 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731500 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731507 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731514 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731521 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731528 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731534 | orchestrator | 2026-02-13 05:34:49.731541 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-13 05:34:49.731548 | orchestrator | Friday 13 February 2026 05:34:48 +0000 (0:00:01.374) 0:00:08.099 ******* 2026-02-13 05:34:49.731555 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:49.731567 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740574 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740700 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740716 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740726 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 
'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740737 | orchestrator | 2026-02-13 05:34:53.740748 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-13 05:34:53.740760 | orchestrator | Friday 13 February 2026 05:34:49 +0000 (0:00:01.005) 0:00:09.105 ******* 2026-02-13 05:34:53.740770 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740780 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740847 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740859 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740890 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740918 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740930 | orchestrator | 2026-02-13 05:34:53.740941 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************ 2026-02-13 05:34:53.740952 | orchestrator | Friday 13 February 2026 05:34:51 +0000 (0:00:01.870) 0:00:10.975 ******* 2026-02-13 05:34:53.740964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.740991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.741002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.741014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.741025 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:34:53.741037 | orchestrator | 2026-02-13 05:34:53.741048 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] *** 2026-02-13 05:34:53.741060 | orchestrator | Friday 13 February 2026 05:34:52 +0000 (0:00:01.338) 0:00:12.314 ******* 2026-02-13 05:34:53.741071 | orchestrator | changed: [testbed-node-0] => { 2026-02-13 05:34:53.741084 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:34:53.741097 | orchestrator | } 2026-02-13 05:34:53.741109 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:34:53.741129 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:34:53.741141 | orchestrator | } 2026-02-13 05:34:53.741153 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 05:34:53.741166 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:34:53.741179 | orchestrator | } 2026-02-13 05:34:53.741192 | orchestrator | changed: [testbed-node-3] => { 2026-02-13 05:34:53.741208 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:34:53.741228 | orchestrator | } 2026-02-13 05:34:53.741247 | orchestrator | changed: [testbed-node-4] => { 2026-02-13 05:34:53.741266 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 
05:34:53.741288 | orchestrator | } 2026-02-13 05:34:53.741319 | orchestrator | changed: [testbed-node-5] => { 2026-02-13 05:35:18.526248 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:35:18.526333 | orchestrator | } 2026-02-13 05:35:18.526343 | orchestrator | 2026-02-13 05:35:18.526350 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-13 05:35:18.526370 | orchestrator | Friday 13 February 2026 05:34:53 +0000 (0:00:00.797) 0:00:13.112 ******* 2026-02-13 05:35:18.526379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:35:18.526390 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:35:18.526397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:35:18.526404 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:35:18.526410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:35:18.526416 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:35:18.526423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:35:18.526429 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:35:18.526436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:35:18.526442 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:35:18.526449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:35:18.526472 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:35:18.526478 | orchestrator | 2026-02-13 05:35:18.526485 | orchestrator | TASK [ovn-controller : Create br-int 
bridge on OpenvSwitch] ******************** 2026-02-13 05:35:18.526491 | orchestrator | Friday 13 February 2026 05:34:55 +0000 (0:00:01.672) 0:00:14.784 ******* 2026-02-13 05:35:18.526497 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:35:18.526504 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:35:18.526511 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:35:18.526517 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:35:18.526523 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:35:18.526529 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:35:18.526535 | orchestrator | 2026-02-13 05:35:18.526541 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-13 05:35:18.526547 | orchestrator | Friday 13 February 2026 05:34:57 +0000 (0:00:02.504) 0:00:17.289 ******* 2026-02-13 05:35:18.526554 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-13 05:35:18.526561 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-13 05:35:18.526573 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-02-13 05:35:18.526591 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-13 05:35:18.526597 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-13 05:35:18.526607 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-13 05:35:18.526613 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-13 05:35:18.526619 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-13 05:35:18.526626 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 05:35:18.526632 | orchestrator | ok: [testbed-node-2] => 
(item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 05:35:18.526638 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 05:35:18.526644 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 05:35:18.526650 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 05:35:18.526656 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-13 05:35:18.526662 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-13 05:35:18.526669 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-13 05:35:18.526676 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-13 05:35:18.526682 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-13 05:35:18.526688 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-13 05:35:18.526694 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'}) 2026-02-13 05:35:18.526701 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 05:35:18.526712 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 05:35:18.526719 | orchestrator | ok: [testbed-node-3] => 
(item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 05:35:18.526725 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 05:35:18.526731 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 05:35:18.526737 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-13 05:35:18.526743 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 05:35:18.526749 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 05:35:18.526755 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 05:35:18.526761 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 05:35:18.526814 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 05:35:18.526824 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-13 05:35:18.526831 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 05:35:18.526839 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 05:35:18.526846 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 05:35:18.526853 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 05:35:18.526860 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 05:35:18.526867 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-13 05:35:18.526874 | orchestrator | ok: 
[testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-13 05:35:18.526881 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-13 05:35:18.526888 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-13 05:35:18.526895 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-13 05:35:18.526907 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-13 05:37:41.580113 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-13 05:37:41.580252 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-13 05:37:41.580278 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-02-13 05:37:41.580288 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-13 05:37:41.580298 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-13 05:37:41.580307 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-13 05:37:41.580316 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-13 05:37:41.580325 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 
'value': '', 'state': 'absent'}) 2026-02-13 05:37:41.580353 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-13 05:37:41.580363 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-13 05:37:41.580372 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-13 05:37:41.580383 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-13 05:37:41.580392 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-13 05:37:41.580401 | orchestrator | 2026-02-13 05:37:41.580411 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 05:37:41.580425 | orchestrator | Friday 13 February 2026 05:35:18 +0000 (0:00:20.136) 0:00:37.425 ******* 2026-02-13 05:37:41.580439 | orchestrator | 2026-02-13 05:37:41.580454 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 05:37:41.580469 | orchestrator | Friday 13 February 2026 05:35:18 +0000 (0:00:00.076) 0:00:37.502 ******* 2026-02-13 05:37:41.580484 | orchestrator | 2026-02-13 05:37:41.580496 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 05:37:41.580511 | orchestrator | Friday 13 February 2026 05:35:18 +0000 (0:00:00.076) 0:00:37.578 ******* 2026-02-13 05:37:41.580525 | orchestrator | 2026-02-13 05:37:41.580541 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 05:37:41.580557 | orchestrator | Friday 13 February 2026 05:35:18 +0000 (0:00:00.073) 0:00:37.652 ******* 2026-02-13 
05:37:41.580572 | orchestrator | 2026-02-13 05:37:41.580589 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 05:37:41.580600 | orchestrator | Friday 13 February 2026 05:35:18 +0000 (0:00:00.074) 0:00:37.727 ******* 2026-02-13 05:37:41.580609 | orchestrator | 2026-02-13 05:37:41.580619 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-13 05:37:41.580634 | orchestrator | Friday 13 February 2026 05:35:18 +0000 (0:00:00.071) 0:00:37.798 ******* 2026-02-13 05:37:41.580649 | orchestrator | 2026-02-13 05:37:41.580711 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-13 05:37:41.580726 | orchestrator | Friday 13 February 2026 05:35:18 +0000 (0:00:00.072) 0:00:37.871 ******* 2026-02-13 05:37:41.580743 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:37:41.580758 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:37:41.580772 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:37:41.580785 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:37:41.580799 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:37:41.580813 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:37:41.580828 | orchestrator | 2026-02-13 05:37:41.580843 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-13 05:37:41.580857 | orchestrator | 2026-02-13 05:37:41.580873 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-13 05:37:41.580889 | orchestrator | Friday 13 February 2026 05:37:29 +0000 (0:02:11.299) 0:02:49.170 ******* 2026-02-13 05:37:41.580905 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:37:41.580922 | orchestrator | 2026-02-13 05:37:41.580933 | orchestrator | TASK [ovn-db : include_tasks] 
************************************************** 2026-02-13 05:37:41.580943 | orchestrator | Friday 13 February 2026 05:37:30 +0000 (0:00:01.159) 0:02:50.330 ******* 2026-02-13 05:37:41.580953 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-13 05:37:41.580964 | orchestrator | 2026-02-13 05:37:41.580974 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-13 05:37:41.580999 | orchestrator | Friday 13 February 2026 05:37:32 +0000 (0:00:01.177) 0:02:51.507 ******* 2026-02-13 05:37:41.581015 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581030 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581043 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581057 | orchestrator | 2026-02-13 05:37:41.581071 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-13 05:37:41.581107 | orchestrator | Friday 13 February 2026 05:37:32 +0000 (0:00:00.840) 0:02:52.347 ******* 2026-02-13 05:37:41.581124 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581139 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581154 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581167 | orchestrator | 2026-02-13 05:37:41.581184 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-13 05:37:41.581193 | orchestrator | Friday 13 February 2026 05:37:33 +0000 (0:00:00.365) 0:02:52.713 ******* 2026-02-13 05:37:41.581202 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581210 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581219 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581228 | orchestrator | 2026-02-13 05:37:41.581237 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-13 05:37:41.581245 | orchestrator | Friday 13 February 2026 
05:37:33 +0000 (0:00:00.343) 0:02:53.056 ******* 2026-02-13 05:37:41.581254 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581263 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581272 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581280 | orchestrator | 2026-02-13 05:37:41.581289 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-02-13 05:37:41.581298 | orchestrator | Friday 13 February 2026 05:37:34 +0000 (0:00:00.620) 0:02:53.676 ******* 2026-02-13 05:37:41.581307 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581316 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581324 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581333 | orchestrator | 2026-02-13 05:37:41.581342 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-02-13 05:37:41.581351 | orchestrator | Friday 13 February 2026 05:37:34 +0000 (0:00:00.351) 0:02:54.028 ******* 2026-02-13 05:37:41.581359 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:37:41.581368 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:37:41.581377 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:37:41.581386 | orchestrator | 2026-02-13 05:37:41.581395 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-02-13 05:37:41.581403 | orchestrator | Friday 13 February 2026 05:37:35 +0000 (0:00:00.362) 0:02:54.390 ******* 2026-02-13 05:37:41.581412 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581421 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581430 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581438 | orchestrator | 2026-02-13 05:37:41.581447 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-02-13 05:37:41.581456 | orchestrator | Friday 13 February 2026 05:37:35 +0000 (0:00:00.772) 0:02:55.162 ******* 
2026-02-13 05:37:41.581465 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581473 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581482 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581491 | orchestrator | 2026-02-13 05:37:41.581499 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-02-13 05:37:41.581508 | orchestrator | Friday 13 February 2026 05:37:36 +0000 (0:00:00.599) 0:02:55.761 ******* 2026-02-13 05:37:41.581517 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581526 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581534 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581543 | orchestrator | 2026-02-13 05:37:41.581552 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-02-13 05:37:41.581561 | orchestrator | Friday 13 February 2026 05:37:37 +0000 (0:00:00.882) 0:02:56.644 ******* 2026-02-13 05:37:41.581569 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581585 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581595 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581610 | orchestrator | 2026-02-13 05:37:41.581623 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-02-13 05:37:41.581637 | orchestrator | Friday 13 February 2026 05:37:37 +0000 (0:00:00.368) 0:02:57.013 ******* 2026-02-13 05:37:41.581652 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:37:41.581694 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:37:41.581709 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:37:41.581725 | orchestrator | 2026-02-13 05:37:41.581739 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-02-13 05:37:41.581753 | orchestrator | Friday 13 February 2026 05:37:38 +0000 (0:00:00.639) 0:02:57.652 ******* 2026-02-13 05:37:41.581767 | orchestrator | skipping: 
[testbed-node-0] 2026-02-13 05:37:41.581780 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:37:41.581794 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:37:41.581809 | orchestrator | 2026-02-13 05:37:41.581825 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-02-13 05:37:41.581841 | orchestrator | Friday 13 February 2026 05:37:38 +0000 (0:00:00.355) 0:02:58.008 ******* 2026-02-13 05:37:41.581855 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581870 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581880 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581888 | orchestrator | 2026-02-13 05:37:41.581897 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-02-13 05:37:41.581905 | orchestrator | Friday 13 February 2026 05:37:39 +0000 (0:00:00.771) 0:02:58.779 ******* 2026-02-13 05:37:41.581914 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.581927 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.581940 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.581952 | orchestrator | 2026-02-13 05:37:41.581964 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-02-13 05:37:41.581986 | orchestrator | Friday 13 February 2026 05:37:39 +0000 (0:00:00.360) 0:02:59.140 ******* 2026-02-13 05:37:41.582002 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.582082 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:37:41.582098 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.582111 | orchestrator | 2026-02-13 05:37:41.582124 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-02-13 05:37:41.582138 | orchestrator | Friday 13 February 2026 05:37:40 +0000 (0:00:01.042) 0:03:00.183 ******* 2026-02-13 05:37:41.582150 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:37:41.582163 | orchestrator | 
ok: [testbed-node-1] 2026-02-13 05:37:41.582177 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:37:41.582191 | orchestrator | 2026-02-13 05:37:41.582204 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-02-13 05:37:41.582218 | orchestrator | Friday 13 February 2026 05:37:41 +0000 (0:00:00.422) 0:03:00.605 ******* 2026-02-13 05:37:41.582232 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:37:41.582246 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:37:41.582259 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:37:41.582273 | orchestrator | 2026-02-13 05:37:41.582301 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-13 05:37:51.368229 | orchestrator | Friday 13 February 2026 05:37:41 +0000 (0:00:00.348) 0:03:00.953 ******* 2026-02-13 05:37:51.368357 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:37:51.368375 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:37:51.368405 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:37:51.368428 | orchestrator | 2026-02-13 05:37:51.368440 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-02-13 05:37:51.368452 | orchestrator | Friday 13 February 2026 05:37:42 +0000 (0:00:00.713) 0:03:01.667 ******* 2026-02-13 05:37:51.368467 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368505 
| orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368518 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368530 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368543 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 
'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368554 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368584 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-13 05:37:51.368621 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:37:51.368644 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 
'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:37:51.368754 | orchestrator | 2026-02-13 05:37:51.368773 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-02-13 05:37:51.368791 | orchestrator | Friday 13 February 2026 05:37:45 +0000 (0:00:03.104) 0:03:04.772 ******* 2026-02-13 05:37:51.368811 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368831 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:37:51.368872 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:01.538916 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:01.539037 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:01.539060 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-02-13 05:38:01.539078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:01.539094 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:01.539109 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:01.539125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 
'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:01.539201 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:01.539214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:01.539224 | orchestrator | 2026-02-13 05:38:01.539235 | orchestrator | TASK [ovn-db : Ensure configuration for relays exists] ************************* 2026-02-13 05:38:01.539245 | orchestrator | Friday 13 February 2026 05:37:51 +0000 (0:00:05.973) 0:03:10.746 ******* 2026-02-13 05:38:01.539254 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-13 05:38:01.539264 | orchestrator | 2026-02-13 05:38:01.539272 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-02-13 05:38:01.539281 | orchestrator | Friday 13 February 2026 05:37:52 +0000 (0:00:00.941) 0:03:11.687 ******* 2026-02-13 
05:38:01.539290 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:38:01.539299 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:38:01.539308 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:38:01.539316 | orchestrator | 2026-02-13 05:38:01.539325 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-13 05:38:01.539334 | orchestrator | Friday 13 February 2026 05:37:53 +0000 (0:00:00.984) 0:03:12.672 ******* 2026-02-13 05:38:01.539342 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:38:01.539351 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:38:01.539360 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:38:01.539368 | orchestrator | 2026-02-13 05:38:01.539377 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-13 05:38:01.539385 | orchestrator | Friday 13 February 2026 05:37:54 +0000 (0:00:01.623) 0:03:14.295 ******* 2026-02-13 05:38:01.539394 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:38:01.539403 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:38:01.539412 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:38:01.539420 | orchestrator | 2026-02-13 05:38:01.539429 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-13 05:38:01.539438 | orchestrator | Friday 13 February 2026 05:37:56 +0000 (0:00:01.942) 0:03:16.238 ******* 2026-02-13 05:38:01.539449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:01.539461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:01.539480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:01.539497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:04.358519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:04.358620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:04.358788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:04.358819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:04.358839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:04.358885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:04.358899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:38:04.358937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:04.358951 | orchestrator | 2026-02-13 05:38:04.358965 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-13 05:38:04.358977 | orchestrator | Friday 13 February 2026 05:38:01 +0000 (0:00:04.673) 0:03:20.912 ******* 2026-02-13 05:38:04.358989 | orchestrator | changed: [testbed-node-0] => { 2026-02-13 05:38:04.359002 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:38:04.359013 | orchestrator | } 2026-02-13 05:38:04.359024 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:38:04.359035 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:38:04.359046 | orchestrator | } 2026-02-13 05:38:04.359056 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 05:38:04.359067 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:38:04.359078 | orchestrator | } 2026-02-13 05:38:04.359094 | orchestrator | 2026-02-13 05:38:04.359112 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-13 05:38:04.359130 | orchestrator | Friday 13 February 2026 05:38:01 +0000 (0:00:00.405) 0:03:21.317 ******* 2026-02-13 05:38:04.359150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:04.359171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:04.359190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:04.359221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:04.359241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': 
{'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:04.359270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:38:04.359295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:39:20.996963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:39:20.997109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-13 05:39:20.997138 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-13 05:39:20.997188 | orchestrator | 2026-02-13 05:39:20.997211 | orchestrator | TASK [service-check-containers : ovn_db | Check containers with iteration] ***** 2026-02-13 05:39:20.997231 | orchestrator | Friday 13 February 2026 05:38:04 +0000 (0:00:02.413) 0:03:23.731 ******* 2026-02-13 05:39:20.997250 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-02-13 05:39:20.997270 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-02-13 05:39:20.997289 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-02-13 05:39:20.997309 | orchestrator | 2026-02-13 05:39:20.997329 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to 
restart containers] *** 2026-02-13 05:39:20.997349 | orchestrator | Friday 13 February 2026 05:38:05 +0000 (0:00:01.279) 0:03:25.011 ******* 2026-02-13 05:39:20.997369 | orchestrator | changed: [testbed-node-0] => { 2026-02-13 05:39:20.997389 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:39:20.997410 | orchestrator | } 2026-02-13 05:39:20.997430 | orchestrator | changed: [testbed-node-1] => { 2026-02-13 05:39:20.997449 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:39:20.997469 | orchestrator | } 2026-02-13 05:39:20.997492 | orchestrator | changed: [testbed-node-2] => { 2026-02-13 05:39:20.997515 | orchestrator |  "msg": "Notifying handlers" 2026-02-13 05:39:20.997535 | orchestrator | } 2026-02-13 05:39:20.997557 | orchestrator | 2026-02-13 05:39:20.997579 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-13 05:39:20.997632 | orchestrator | Friday 13 February 2026 05:38:06 +0000 (0:00:00.589) 0:03:25.601 ******* 2026-02-13 05:39:20.997653 | orchestrator | 2026-02-13 05:39:20.997675 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-13 05:39:20.997698 | orchestrator | Friday 13 February 2026 05:38:06 +0000 (0:00:00.075) 0:03:25.676 ******* 2026-02-13 05:39:20.997720 | orchestrator | 2026-02-13 05:39:20.997742 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-13 05:39:20.997763 | orchestrator | Friday 13 February 2026 05:38:06 +0000 (0:00:00.072) 0:03:25.749 ******* 2026-02-13 05:39:20.997787 | orchestrator | 2026-02-13 05:39:20.997810 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-13 05:39:20.997833 | orchestrator | Friday 13 February 2026 05:38:06 +0000 (0:00:00.072) 0:03:25.821 ******* 2026-02-13 05:39:20.997851 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:39:20.997872 | orchestrator | changed: 
[testbed-node-1] 2026-02-13 05:39:20.997892 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:39:20.997912 | orchestrator | 2026-02-13 05:39:20.997932 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-13 05:39:20.997951 | orchestrator | Friday 13 February 2026 05:38:21 +0000 (0:00:15.173) 0:03:40.995 ******* 2026-02-13 05:39:20.997972 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:39:20.997992 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:39:20.998121 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:39:20.998153 | orchestrator | 2026-02-13 05:39:20.998171 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-02-13 05:39:20.998202 | orchestrator | Friday 13 February 2026 05:38:36 +0000 (0:00:15.267) 0:03:56.262 ******* 2026-02-13 05:39:20.998222 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-02-13 05:39:20.998241 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-02-13 05:39:20.998258 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-02-13 05:39:20.998276 | orchestrator | 2026-02-13 05:39:20.998294 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-13 05:39:20.998333 | orchestrator | Friday 13 February 2026 05:38:51 +0000 (0:00:15.000) 0:04:11.263 ******* 2026-02-13 05:39:20.998364 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:39:20.998381 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:39:20.998416 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:39:20.998434 | orchestrator | 2026-02-13 05:39:20.998481 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-13 05:39:20.998498 | orchestrator | Friday 13 February 2026 05:39:07 +0000 (0:00:15.888) 0:04:27.152 ******* 2026-02-13 05:39:20.998516 | orchestrator | Pausing for 5 seconds 2026-02-13 05:39:20.998535 | 
orchestrator | ok: [testbed-node-0] 2026-02-13 05:39:20.998553 | orchestrator | 2026-02-13 05:39:20.998568 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-13 05:39:20.998585 | orchestrator | Friday 13 February 2026 05:39:12 +0000 (0:00:05.165) 0:04:32.318 ******* 2026-02-13 05:39:20.998628 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:39:20.998646 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:39:20.998663 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:39:20.998681 | orchestrator | 2026-02-13 05:39:20.998699 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-13 05:39:20.998717 | orchestrator | Friday 13 February 2026 05:39:13 +0000 (0:00:00.825) 0:04:33.143 ******* 2026-02-13 05:39:20.998734 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:39:20.998752 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:39:20.998769 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:39:20.998787 | orchestrator | 2026-02-13 05:39:20.998806 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-13 05:39:20.998824 | orchestrator | Friday 13 February 2026 05:39:14 +0000 (0:00:00.761) 0:04:33.904 ******* 2026-02-13 05:39:20.998874 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:39:20.998895 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:39:20.998912 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:39:20.999009 | orchestrator | 2026-02-13 05:39:20.999032 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-13 05:39:20.999050 | orchestrator | Friday 13 February 2026 05:39:15 +0000 (0:00:00.864) 0:04:34.769 ******* 2026-02-13 05:39:20.999070 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:39:20.999090 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:39:20.999106 | orchestrator | changed: [testbed-node-1] 
2026-02-13 05:39:20.999117 | orchestrator | 2026-02-13 05:39:20.999128 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-13 05:39:20.999139 | orchestrator | Friday 13 February 2026 05:39:16 +0000 (0:00:00.972) 0:04:35.741 ******* 2026-02-13 05:39:20.999149 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:39:20.999160 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:39:20.999171 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:39:20.999181 | orchestrator | 2026-02-13 05:39:20.999192 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-13 05:39:20.999203 | orchestrator | Friday 13 February 2026 05:39:17 +0000 (0:00:00.878) 0:04:36.620 ******* 2026-02-13 05:39:20.999214 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:39:20.999224 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:39:20.999235 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:39:20.999246 | orchestrator | 2026-02-13 05:39:20.999257 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] *************************************** 2026-02-13 05:39:20.999268 | orchestrator | Friday 13 February 2026 05:39:18 +0000 (0:00:00.793) 0:04:37.413 ******* 2026-02-13 05:39:20.999278 | orchestrator | ok: [testbed-node-0] => (item=1) 2026-02-13 05:39:20.999289 | orchestrator | ok: [testbed-node-1] => (item=1) 2026-02-13 05:39:20.999300 | orchestrator | ok: [testbed-node-2] => (item=1) 2026-02-13 05:39:20.999311 | orchestrator | 2026-02-13 05:39:20.999321 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-13 05:39:20.999351 | orchestrator | testbed-node-0 : ok=48  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-13 05:39:20.999364 | orchestrator | testbed-node-1 : ok=49  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-13 05:39:20.999389 | orchestrator | testbed-node-2 : ok=47  changed=15  
unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-02-13 05:39:20.999400 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 05:39:20.999411 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 05:39:20.999422 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-13 05:39:20.999432 | orchestrator | 2026-02-13 05:39:20.999443 | orchestrator | 2026-02-13 05:39:20.999454 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-13 05:39:20.999465 | orchestrator | Friday 13 February 2026 05:39:20 +0000 (0:00:02.938) 0:04:40.352 ******* 2026-02-13 05:39:20.999476 | orchestrator | =============================================================================== 2026-02-13 05:39:20.999487 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.30s 2026-02-13 05:39:20.999508 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.14s 2026-02-13 05:39:20.999519 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.89s 2026-02-13 05:39:20.999530 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 15.27s 2026-02-13 05:39:20.999541 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 15.17s 2026-02-13 05:39:20.999734 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 15.00s 2026-02-13 05:39:20.999763 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.97s 2026-02-13 05:39:20.999778 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 5.17s 2026-02-13 05:39:20.999804 | orchestrator | service-check-containers : ovn_db | Check containers 
-------------------- 4.67s 2026-02-13 05:39:20.999851 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 3.10s 2026-02-13 05:39:21.398267 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 2.94s 2026-02-13 05:39:21.398354 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.50s 2026-02-13 05:39:21.398364 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.41s 2026-02-13 05:39:21.398372 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 1.94s 2026-02-13 05:39:21.398380 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.87s 2026-02-13 05:39:21.398388 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.81s 2026-02-13 05:39:21.398395 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.67s 2026-02-13 05:39:21.398403 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 1.62s 2026-02-13 05:39:21.398410 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.49s 2026-02-13 05:39:21.398418 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.37s 2026-02-13 05:39:21.708114 | orchestrator | + [[ false == \f\a\l\s\e ]] 2026-02-13 05:39:21.708225 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-13 05:39:21.708243 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh 2026-02-13 05:39:21.714452 | orchestrator | + set -e 2026-02-13 05:39:21.714519 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-13 05:39:21.714532 | orchestrator | ++ export INTERACTIVE=false 2026-02-13 05:39:21.714545 | orchestrator | ++ INTERACTIVE=false 2026-02-13 05:39:21.714556 | orchestrator | ++ 
export OSISM_APPLY_RETRY=1 2026-02-13 05:39:21.714567 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-13 05:39:21.714579 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes 2026-02-13 05:39:23.769106 | orchestrator | 2026-02-13 05:39:23 | INFO  | Task 032c0009-7605-472d-997d-70082f0b2ec5 (ceph-rolling_update) was prepared for execution. 2026-02-13 05:39:23.769235 | orchestrator | 2026-02-13 05:39:23 | INFO  | It takes a moment until task 032c0009-7605-472d-997d-70082f0b2ec5 (ceph-rolling_update) has been started and output is visible here. 2026-02-13 05:40:19.960284 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-13 05:40:19.960470 | orchestrator | 2.16.14 2026-02-13 05:40:19.960502 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-13 05:40:19.960524 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-13 05:40:19.960656 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-13 05:40:19.960680 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-13 05:40:19.960722 | orchestrator | 2026-02-13 05:40:19.960743 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-13 05:40:19.960764 | orchestrator | 2026-02-13 05:40:19.960785 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-13 05:40:19.960808 | orchestrator | Friday 13 February 2026 05:39:31 +0000 (0:00:01.147) 0:00:01.147 ******* 2026-02-13 05:40:19.960833 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-13 05:40:19.960855 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-13 05:40:19.960879 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-13 05:40:19.960904 | orchestrator | 
skipping: [localhost] 2026-02-13 05:40:19.960924 | orchestrator | 2026-02-13 05:40:19.960945 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-02-13 05:40:19.960967 | orchestrator | 2026-02-13 05:40:19.960988 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-02-13 05:40:19.961009 | orchestrator | Friday 13 February 2026 05:39:32 +0000 (0:00:00.939) 0:00:02.086 ******* 2026-02-13 05:40:19.961029 | orchestrator | ok: [testbed-node-0] => { 2026-02-13 05:40:19.961050 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 05:40:19.961071 | orchestrator | } 2026-02-13 05:40:19.961093 | orchestrator | ok: [testbed-node-1] => { 2026-02-13 05:40:19.961114 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 05:40:19.961135 | orchestrator | } 2026-02-13 05:40:19.961155 | orchestrator | ok: [testbed-node-2] => { 2026-02-13 05:40:19.961174 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 05:40:19.961194 | orchestrator | } 2026-02-13 05:40:19.961213 | orchestrator | ok: [testbed-node-3] => { 2026-02-13 05:40:19.961232 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 05:40:19.961251 | orchestrator | } 2026-02-13 05:40:19.961270 | orchestrator | ok: [testbed-node-4] => { 2026-02-13 05:40:19.961311 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 05:40:19.961330 | orchestrator | } 2026-02-13 05:40:19.961347 | orchestrator | ok: [testbed-node-5] => { 2026-02-13 05:40:19.961365 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 05:40:19.961383 | orchestrator | } 2026-02-13 05:40:19.961400 | orchestrator | ok: [testbed-manager] => { 2026-02-13 05:40:19.961418 | orchestrator |  "msg": "gather facts on all Ceph hosts for following 
reference" 2026-02-13 05:40:19.961436 | orchestrator | } 2026-02-13 05:40:19.961454 | orchestrator | 2026-02-13 05:40:19.961472 | orchestrator | TASK [Gather facts] ************************************************************ 2026-02-13 05:40:19.961491 | orchestrator | Friday 13 February 2026 05:39:34 +0000 (0:00:02.077) 0:00:04.164 ******* 2026-02-13 05:40:19.961509 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:19.961555 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:19.961603 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:19.961622 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:19.961639 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:19.961656 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:19.961674 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:19.961691 | orchestrator | 2026-02-13 05:40:19.961706 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-13 05:40:19.961723 | orchestrator | Friday 13 February 2026 05:39:37 +0000 (0:00:03.556) 0:00:07.720 ******* 2026-02-13 05:40:19.961741 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 05:40:19.961759 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-13 05:40:19.961775 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-13 05:40:19.961793 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:40:19.961812 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-13 05:40:19.961831 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 05:40:19.961849 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-13 05:40:19.961868 | orchestrator | 
2026-02-13 05:40:19.961886 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-13 05:40:19.961904 | orchestrator | Friday 13 February 2026 05:40:07 +0000 (0:00:29.364) 0:00:37.085 ******* 2026-02-13 05:40:19.961923 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:19.961943 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:19.961963 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:19.961981 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:19.962000 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:19.962115 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:19.962146 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:19.962166 | orchestrator | 2026-02-13 05:40:19.962184 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-13 05:40:19.962200 | orchestrator | Friday 13 February 2026 05:40:08 +0000 (0:00:00.941) 0:00:38.027 ******* 2026-02-13 05:40:19.962238 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-13 05:40:19.962252 | orchestrator | 2026-02-13 05:40:19.962263 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-13 05:40:19.962274 | orchestrator | Friday 13 February 2026 05:40:10 +0000 (0:00:01.853) 0:00:39.881 ******* 2026-02-13 05:40:19.962285 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:19.962296 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:19.962307 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:19.962317 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:19.962328 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:19.962339 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:19.962350 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:19.962360 | orchestrator | 2026-02-13 
05:40:19.962371 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-13 05:40:19.962382 | orchestrator | Friday 13 February 2026 05:40:11 +0000 (0:00:01.360) 0:00:41.241 ******* 2026-02-13 05:40:19.962393 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:19.962404 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:19.962415 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:19.962425 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:19.962436 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:19.962446 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:19.962457 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:19.962468 | orchestrator | 2026-02-13 05:40:19.962479 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-13 05:40:19.962493 | orchestrator | Friday 13 February 2026 05:40:12 +0000 (0:00:00.789) 0:00:42.030 ******* 2026-02-13 05:40:19.962540 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:19.962596 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:19.962615 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:19.962633 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:19.962651 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:19.962667 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:19.962682 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:19.962698 | orchestrator | 2026-02-13 05:40:19.962714 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-13 05:40:19.962729 | orchestrator | Friday 13 February 2026 05:40:13 +0000 (0:00:01.322) 0:00:43.353 ******* 2026-02-13 05:40:19.962746 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:19.962763 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:19.962779 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:19.962795 | orchestrator | ok: [testbed-node-3] 2026-02-13 
05:40:19.962813 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:19.962829 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:19.962846 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:19.962863 | orchestrator | 2026-02-13 05:40:19.962879 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-13 05:40:19.962897 | orchestrator | Friday 13 February 2026 05:40:14 +0000 (0:00:00.752) 0:00:44.105 ******* 2026-02-13 05:40:19.962913 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:19.962930 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:19.962947 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:19.962977 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:19.962994 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:19.963010 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:19.963027 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:19.963121 | orchestrator | 2026-02-13 05:40:19.963142 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-13 05:40:19.963160 | orchestrator | Friday 13 February 2026 05:40:15 +0000 (0:00:01.045) 0:00:45.151 ******* 2026-02-13 05:40:19.963178 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:19.963194 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:19.963211 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:19.963221 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:19.963231 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:19.963240 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:19.963250 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:19.963260 | orchestrator | 2026-02-13 05:40:19.963270 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-13 05:40:19.963279 | orchestrator | Friday 13 February 2026 05:40:16 +0000 (0:00:00.760) 0:00:45.911 ******* 2026-02-13 05:40:19.963289 | 
orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:19.963300 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:19.963309 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:19.963319 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:19.963329 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:19.963339 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:19.963348 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:19.963358 | orchestrator | 2026-02-13 05:40:19.963368 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-13 05:40:19.963378 | orchestrator | Friday 13 February 2026 05:40:17 +0000 (0:00:01.002) 0:00:46.914 ******* 2026-02-13 05:40:19.963388 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:19.963397 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:19.963407 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:19.963417 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:19.963426 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:19.963436 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:19.963446 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:19.963456 | orchestrator | 2026-02-13 05:40:19.963465 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-13 05:40:19.963491 | orchestrator | Friday 13 February 2026 05:40:17 +0000 (0:00:00.777) 0:00:47.692 ******* 2026-02-13 05:40:19.963508 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:40:19.963526 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 05:40:19.963542 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 05:40:19.963595 | orchestrator | 2026-02-13 05:40:19.963614 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-13 05:40:19.963632 | orchestrator | Friday 13 February 2026 05:40:19 +0000 (0:00:01.201) 0:00:48.893 ******* 2026-02-13 05:40:19.963648 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:19.963664 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:19.963680 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:19.963697 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:19.963715 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:19.963731 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:19.963747 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:19.963765 | orchestrator | 2026-02-13 05:40:19.963781 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-13 05:40:19.963812 | orchestrator | Friday 13 February 2026 05:40:19 +0000 (0:00:00.920) 0:00:49.814 ******* 2026-02-13 05:40:31.252096 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:40:31.252219 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 05:40:31.252239 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 05:40:31.252250 | orchestrator | 2026-02-13 05:40:31.252264 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-13 05:40:31.252273 | orchestrator | Friday 13 February 2026 05:40:22 +0000 (0:00:02.183) 0:00:51.997 ******* 2026-02-13 05:40:31.252280 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-13 05:40:31.252288 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-13 05:40:31.252294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-13 05:40:31.252301 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:31.252307 | orchestrator | 2026-02-13 05:40:31.252314 | orchestrator | TASK [ceph-facts : Check if the 
ceph mon socket is in-use] ********************* 2026-02-13 05:40:31.252320 | orchestrator | Friday 13 February 2026 05:40:22 +0000 (0:00:00.392) 0:00:52.390 ******* 2026-02-13 05:40:31.252328 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-13 05:40:31.252337 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-13 05:40:31.252344 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-13 05:40:31.252350 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:31.252356 | orchestrator | 2026-02-13 05:40:31.252363 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-13 05:40:31.252369 | orchestrator | Friday 13 February 2026 05:40:23 +0000 (0:00:00.851) 0:00:53.241 ******* 2026-02-13 05:40:31.252396 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:31.252434 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:31.252448 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:31.252458 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:31.252468 | orchestrator | 2026-02-13 05:40:31.252478 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-13 05:40:31.252488 | orchestrator | Friday 13 February 2026 05:40:23 +0000 (0:00:00.167) 0:00:53.409 ******* 2026-02-13 05:40:31.252500 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9a39aafafb69', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-13 05:40:20.622703', 'end': '2026-02-13 05:40:20.668994', 'delta': '0:00:00.046291', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a39aafafb69'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-13 05:40:31.252537 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b8f8955ec790', 'stderr': 
'', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-13 05:40:21.169937', 'end': '2026-02-13 05:40:21.217482', 'delta': '0:00:00.047545', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8f8955ec790'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-13 05:40:31.252549 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '30f78d02966b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-13 05:40:21.926500', 'end': '2026-02-13 05:40:21.980720', 'delta': '0:00:00.054220', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['30f78d02966b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-13 05:40:31.252631 | orchestrator | 2026-02-13 05:40:31.252639 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-13 05:40:31.252647 | orchestrator | Friday 13 February 2026 05:40:23 +0000 (0:00:00.424) 0:00:53.833 ******* 2026-02-13 05:40:31.252654 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:31.252662 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:31.252669 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:31.252676 | orchestrator | ok: [testbed-node-3] 
2026-02-13 05:40:31.252690 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:31.252708 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:31.252715 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:31.252722 | orchestrator | 2026-02-13 05:40:31.252743 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-13 05:40:31.252750 | orchestrator | Friday 13 February 2026 05:40:24 +0000 (0:00:00.933) 0:00:54.766 ******* 2026-02-13 05:40:31.252757 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:31.252764 | orchestrator | 2026-02-13 05:40:31.252771 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-13 05:40:31.252779 | orchestrator | Friday 13 February 2026 05:40:25 +0000 (0:00:00.270) 0:00:55.037 ******* 2026-02-13 05:40:31.252786 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:31.252793 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:31.252800 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:31.252807 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:31.252814 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:31.252821 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:31.252828 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:31.252835 | orchestrator | 2026-02-13 05:40:31.252842 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-13 05:40:31.252849 | orchestrator | Friday 13 February 2026 05:40:26 +0000 (0:00:01.015) 0:00:56.053 ******* 2026-02-13 05:40:31.252857 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:31.252864 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-13 05:40:31.252871 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-13 05:40:31.252878 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-13 05:40:31.252885 | orchestrator | ok: 
[testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-13 05:40:31.252892 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-13 05:40:31.252899 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-13 05:40:31.252907 | orchestrator | 2026-02-13 05:40:31.252913 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-13 05:40:31.252921 | orchestrator | Friday 13 February 2026 05:40:28 +0000 (0:00:02.411) 0:00:58.464 ******* 2026-02-13 05:40:31.252928 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:31.252934 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:31.252940 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:31.252946 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:31.252952 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:31.252959 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:31.252965 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:31.252971 | orchestrator | 2026-02-13 05:40:31.252977 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-13 05:40:31.252983 | orchestrator | Friday 13 February 2026 05:40:29 +0000 (0:00:00.976) 0:00:59.441 ******* 2026-02-13 05:40:31.252990 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:31.252996 | orchestrator | 2026-02-13 05:40:31.253002 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-13 05:40:31.253008 | orchestrator | Friday 13 February 2026 05:40:29 +0000 (0:00:00.135) 0:00:59.577 ******* 2026-02-13 05:40:31.253015 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:31.253021 | orchestrator | 2026-02-13 05:40:31.253027 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-13 05:40:31.253033 | orchestrator | Friday 13 February 2026 05:40:29 +0000 (0:00:00.225) 0:00:59.802 ******* 2026-02-13 
05:40:31.253039 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:31.253046 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:31.253052 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:31.253058 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:31.253064 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:31.253076 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:36.513243 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:36.513343 | orchestrator | 2026-02-13 05:40:36.513381 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-13 05:40:36.513393 | orchestrator | Friday 13 February 2026 05:40:31 +0000 (0:00:01.305) 0:01:01.108 ******* 2026-02-13 05:40:36.513403 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:36.513413 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:36.513423 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:36.513433 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:36.513442 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:36.513451 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:36.513461 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:36.513471 | orchestrator | 2026-02-13 05:40:36.513481 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-13 05:40:36.513490 | orchestrator | Friday 13 February 2026 05:40:31 +0000 (0:00:00.753) 0:01:01.861 ******* 2026-02-13 05:40:36.513500 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:36.513510 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:36.513519 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:36.513529 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:36.513538 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:36.513548 | orchestrator | skipping: [testbed-node-5] 2026-02-13 
05:40:36.513604 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:36.513614 | orchestrator | 2026-02-13 05:40:36.513624 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-13 05:40:36.513634 | orchestrator | Friday 13 February 2026 05:40:32 +0000 (0:00:00.934) 0:01:02.796 ******* 2026-02-13 05:40:36.513644 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:36.513653 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:36.513663 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:36.513672 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:36.513682 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:36.513691 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:36.513701 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:36.513710 | orchestrator | 2026-02-13 05:40:36.513720 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-13 05:40:36.513730 | orchestrator | Friday 13 February 2026 05:40:33 +0000 (0:00:00.745) 0:01:03.542 ******* 2026-02-13 05:40:36.513740 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:36.513749 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:36.513759 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:36.513768 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:36.513778 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:36.513788 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:36.513799 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:36.513810 | orchestrator | 2026-02-13 05:40:36.513836 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-13 05:40:36.513849 | orchestrator | Friday 13 February 2026 05:40:34 +0000 (0:00:00.981) 0:01:04.524 ******* 2026-02-13 05:40:36.513860 | orchestrator | skipping: [testbed-node-0] 2026-02-13 
05:40:36.513871 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:36.513882 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:36.513894 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:36.513904 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:36.513915 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:36.513926 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:36.513937 | orchestrator | 2026-02-13 05:40:36.513950 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-13 05:40:36.513962 | orchestrator | Friday 13 February 2026 05:40:35 +0000 (0:00:00.743) 0:01:05.267 ******* 2026-02-13 05:40:36.513974 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:36.513985 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:36.513996 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:36.514007 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:36.514078 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:36.514096 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:36.514114 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:36.514131 | orchestrator | 2026-02-13 05:40:36.514147 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-13 05:40:36.514165 | orchestrator | Friday 13 February 2026 05:40:36 +0000 (0:00:00.969) 0:01:06.236 ******* 2026-02-13 05:40:36.514185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.514205 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.514223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.514264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 05:40:36.514278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.514288 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.514298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.514319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8816e0be', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': 
'8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 05:40:36.514347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.792852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.792959 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.792976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.793005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.793042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 05:40:36.793058 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.793069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.793081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.793122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e7782c1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16'], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-13 05:40:36.793154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.793166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.793179 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:36.793192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.793204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.793215 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.793235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 05:40:36.934324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.934422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.934436 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.934490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '70bc5ce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 05:40:36.934507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.934537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.934639 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:36.934668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}})  2026-02-13 05:40:36.934689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab', 'dm-uuid-LVM-rnSZIgArmxAmbcLvOJFLEn8mgwYRnXlE3olXViRUdTa1K1tyYaVS99W21lGqyhJE'], 'uuids': ['e40d66eb-8e66-4883-be8d-d975a39e8f71'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a697f046', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE']}})  2026-02-13 05:40:36.934711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226', 'scsi-SQEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4e1fd529', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-13 05:40:36.934724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-09kMNs-4MO2-JNQz-8aT0-f4so-6Z9I-fZuQQ1', 'scsi-0QEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165', 'scsi-SQEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ecca72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f']}})  2026-02-13 05:40:36.934737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.934748 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:36.934769 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-48-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 05:40:37.086999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.087094 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:37.087112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM', 'dm-uuid-CRYPT-LUKS2-f8c9b83f530a4ae8b2d9ba3a7349e63b-PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 05:40:37.087168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 
05:40:37.087189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f', 'dm-uuid-LVM-NgeS2OAf1eQbq2fjon94hTyRASj6CjzqPJD89JdnKlkkAQnNMDwPk0jJQkfrVtCM'], 'uuids': ['f8c9b83f-530a-4ae8-b2d9-ba3a7349e63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '48ecca72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM']}})  2026-02-13 05:40:37.087209 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-NVJFab-TDNv-OZxQ-P7ah-aykU-eVq3-5VieAW', 'scsi-0QEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322', 'scsi-SQEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a697f046', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab']}})  2026-02-13 05:40:37.087228 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.087282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd82ec97d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 05:40:37.087315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.087334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.087351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.087368 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE', 'dm-uuid-CRYPT-LUKS2-e40d66eb8e664883be8dd975a39e8f71-3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 05:40:37.087388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f', 'dm-uuid-LVM-RYX1Dlxf1hzjqbJFMgqiTL3FjKVcMxwPPZJAxrorT0BeTcQP51a9OdG0Vnk33f2g'], 'uuids': ['08a6103f-7fcb-4231-b947-0f95a49b9065'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '848b7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g']}})  2026-02-13 05:40:37.087418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460', 'scsi-SQEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b26d7d0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 05:40:37.273009 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1jNUFK-ju5u-D7ij-Py62-0wVT-eVBU-hKEJvE', 'scsi-0QEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788', 'scsi-SQEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '328f169c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6']}})  2026-02-13 05:40:37.273125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.273144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.273157 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 05:40:37.273170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.273182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI', 'dm-uuid-CRYPT-LUKS2-b79d0c525d1a4583b35f4aeb5a2ac24e-8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 05:40:37.273194 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.273258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6', 'dm-uuid-LVM-smkv35UmDioSyiKczhjvHmfqXmqpX7QT8MWiF1jmxyBB14hpOPcESPktQ6Pbw4WI'], 'uuids': ['b79d0c52-5d1a-4583-b35f-4aeb5a2ac24e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '328f169c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI']}})  2026-02-13 05:40:37.273318 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:37.273349 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6g4jq1-0RJN-2V5m-4iLs-xOZr-EnEV-0z42fM', 'scsi-0QEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52', 'scsi-SQEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '848b7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f']}})  2026-02-13 05:40:37.273370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.273395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6ae2313', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 05:40:37.273443 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.400486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.400582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g', 'dm-uuid-CRYPT-LUKS2-08a6103f7fcb4231b9470f95a49b9065-PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 05:40:37.400592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.400597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1', 'dm-uuid-LVM-RKsGyEe6XXFp06rqxLIXGVK0DxbU0GWh40QmdxhJXhUwOk2tHWKnT9i9j7e2AfAw'], 'uuids': ['3a3054ab-e73d-4dec-b96d-e7c980380425'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2cf23bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw']}})  2026-02-13 05:40:37.400603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d', 'scsi-SQEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53853b9a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 05:40:37.400609 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-39Ra41-aCTS-vi2k-2lif-ZhtI-jPX4-Yda4Fg', 'scsi-0QEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e', 'scsi-SQEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e8d0143b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6']}})  2026-02-13 05:40:37.400631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.400646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.400654 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 05:40:37.400658 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.400662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT', 'dm-uuid-CRYPT-LUKS2-6c8d9b65364e41e0b393c831fad91b63-Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 05:40:37.400666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.400670 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6', 'dm-uuid-LVM-9LyOomemE8dFgmHX9kCkGcu77vJ6QdzmZ9A74lmOVeHsLlc22BADhqJ8uA2fx6vT'], 'uuids': ['6c8d9b65-364e-41e0-b393-c831fad91b63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e8d0143b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT']}})  2026-02-13 05:40:37.400678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-198k1R-oXI9-ndMQ-UumA-r8dv-vGdj-iXXLN8', 'scsi-0QEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3', 'scsi-SQEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2cf23bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1']}})  2026-02-13 05:40:37.400687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.636346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd8b8514', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 05:40:37.636451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.636469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.636504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw', 'dm-uuid-CRYPT-LUKS2-3a3054abe73d4decb96de7c980380425-40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 05:40:37.636518 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:37.636530 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:37.636542 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.636655 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.636676 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.636688 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': 
['2026-02-13-02-26-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 05:40:37.636700 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.636711 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.636722 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:40:37.636757 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f5b10e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})
2026-02-13 05:40:38.025024 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 05:40:38.025114 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 05:40:38.025127 | orchestrator | skipping: [testbed-manager]
2026-02-13 05:40:38.025138 | orchestrator |
2026-02-13 05:40:38.025148 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-13 05:40:38.025157 | orchestrator | Friday 13 February 2026 05:40:37 +0000 (0:00:01.259) 0:01:07.495 *******
2026-02-13 05:40:38.025168 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 05:40:38.025201 |
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.025210 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.025221 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.025259 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.025268 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.025276 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.025293 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8816e0be', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.025313 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173498 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173631 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:38.173647 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173684 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173693 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173701 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173721 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173745 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173753 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173769 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e7782c1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15', 
'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173782 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.173794 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430292 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:38.430396 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430416 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430429 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430443 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430472 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430500 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430632 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430662 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '70bc5ce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430687 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430700 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.430729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 
'item'})  2026-02-13 05:40:38.553144 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab', 'dm-uuid-LVM-rnSZIgArmxAmbcLvOJFLEn8mgwYRnXlE3olXViRUdTa1K1tyYaVS99W21lGqyhJE'], 'uuids': ['e40d66eb-8e66-4883-be8d-d975a39e8f71'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a697f046', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.553250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226', 'scsi-SQEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4e1fd529', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.553285 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-09kMNs-4MO2-JNQz-8aT0-f4so-6Z9I-fZuQQ1', 'scsi-0QEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165', 'scsi-SQEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ecca72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.553302 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:38.553316 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.553365 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery 
| default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.553410 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.553433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.553453 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM', 'dm-uuid-CRYPT-LUKS2-f8c9b83f530a4ae8b2d9ba3a7349e63b-PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.553481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.553495 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f', 'dm-uuid-LVM-NgeS2OAf1eQbq2fjon94hTyRASj6CjzqPJD89JdnKlkkAQnNMDwPk0jJQkfrVtCM'], 'uuids': ['f8c9b83f-530a-4ae8-b2d9-ba3a7349e63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '48ecca72', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.553525 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-NVJFab-TDNv-OZxQ-P7ah-aykU-eVq3-5VieAW', 'scsi-0QEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322', 'scsi-SQEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a697f046', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.639476 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.639584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.639599 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f', 'dm-uuid-LVM-RYX1Dlxf1hzjqbJFMgqiTL3FjKVcMxwPPZJAxrorT0BeTcQP51a9OdG0Vnk33f2g'], 'uuids': ['08a6103f-7fcb-4231-b947-0f95a49b9065'], 'labels': [], 
'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '848b7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.639664 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd82ec97d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': 
'10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.639692 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460', 'scsi-SQEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b26d7d0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.639701 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.639718 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1jNUFK-ju5u-D7ij-Py62-0wVT-eVBU-hKEJvE', 'scsi-0QEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788', 'scsi-SQEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '328f169c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.639730 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.639738 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.639751 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE', 'dm-uuid-CRYPT-LUKS2-e40d66eb8e664883be8dd975a39e8f71-3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.751926 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.752032 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.752065 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.752098 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI', 'dm-uuid-CRYPT-LUKS2-b79d0c525d1a4583b35f4aeb5a2ac24e-8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.752111 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.752143 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6', 'dm-uuid-LVM-smkv35UmDioSyiKczhjvHmfqXmqpX7QT8MWiF1jmxyBB14hpOPcESPktQ6Pbw4WI'], 'uuids': ['b79d0c52-5d1a-4583-b35f-4aeb5a2ac24e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '328f169c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.752158 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6g4jq1-0RJN-2V5m-4iLs-xOZr-EnEV-0z42fM', 'scsi-0QEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52', 'scsi-SQEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '848b7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.752175 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:38.752194 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.752224 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6ae2313', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837513 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837590 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g', 'dm-uuid-CRYPT-LUKS2-08a6103f7fcb4231b9470f95a49b9065-PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837608 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837618 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1', 'dm-uuid-LVM-RKsGyEe6XXFp06rqxLIXGVK0DxbU0GWh40QmdxhJXhUwOk2tHWKnT9i9j7e2AfAw'], 'uuids': ['3a3054ab-e73d-4dec-b96d-e7c980380425'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2cf23bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837644 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d', 'scsi-SQEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53853b9a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837656 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-39Ra41-aCTS-vi2k-2lif-ZhtI-jPX4-Yda4Fg', 'scsi-0QEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e', 'scsi-SQEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e8d0143b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837678 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837688 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837698 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837707 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.837736 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT', 'dm-uuid-CRYPT-LUKS2-6c8d9b65364e41e0b393c831fad91b63-Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.917772 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.917902 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:38.917936 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6', 'dm-uuid-LVM-9LyOomemE8dFgmHX9kCkGcu77vJ6QdzmZ9A74lmOVeHsLlc22BADhqJ8uA2fx6vT'], 'uuids': ['6c8d9b65-364e-41e0-b393-c831fad91b63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e8d0143b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.917951 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-198k1R-oXI9-ndMQ-UumA-r8dv-vGdj-iXXLN8', 'scsi-0QEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3', 'scsi-SQEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2cf23bc', 'removable': '0', 
'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1']}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.917966 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.918005 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd8b8514', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.918089 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.918103 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.918115 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 
'ansible_loop_var': 'item'})  2026-02-13 05:40:38.918127 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:38.918148 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:42.341976 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-26-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 
KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:42.342192 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw', 'dm-uuid-CRYPT-LUKS2-3a3054abe73d4decb96de7c980380425-40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:42.342223 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:42.342245 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:42.342265 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:42.342288 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:42.342355 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f5b10e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:42.342412 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:42.342435 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:40:42.342453 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:42.342471 | orchestrator | 2026-02-13 05:40:42.342493 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-13 05:40:42.342515 | orchestrator | Friday 13 February 2026 05:40:39 +0000 (0:00:01.434) 0:01:08.929 ******* 2026-02-13 05:40:42.342535 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:42.342588 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:42.342609 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:42.342628 | orchestrator | ok: [testbed-node-3] 2026-02-13 
05:40:42.342646 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:42.342679 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:42.342700 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:42.342719 | orchestrator | 2026-02-13 05:40:42.342739 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-13 05:40:42.342757 | orchestrator | Friday 13 February 2026 05:40:40 +0000 (0:00:01.324) 0:01:10.254 ******* 2026-02-13 05:40:42.342775 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:42.342793 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:42.342812 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:42.342830 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:42.342849 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:42.342869 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:42.342888 | orchestrator | ok: [testbed-manager] 2026-02-13 05:40:42.342907 | orchestrator | 2026-02-13 05:40:42.342926 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-13 05:40:42.342945 | orchestrator | Friday 13 February 2026 05:40:41 +0000 (0:00:00.722) 0:01:10.976 ******* 2026-02-13 05:40:42.342965 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:40:42.342983 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:40:42.343001 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:40:42.343019 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:42.343038 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:42.343057 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:42.343091 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:55.109150 | orchestrator | 2026-02-13 05:40:55.109320 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-13 05:40:55.109351 | orchestrator | Friday 13 February 2026 05:40:42 +0000 (0:00:01.226) 0:01:12.202 ******* 2026-02-13 05:40:55.109371 | 
orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:55.109393 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:55.109412 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:55.109426 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:55.109437 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:55.109449 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:55.109460 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:55.109472 | orchestrator | 2026-02-13 05:40:55.109484 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-13 05:40:55.109517 | orchestrator | Friday 13 February 2026 05:40:43 +0000 (0:00:00.737) 0:01:12.939 ******* 2026-02-13 05:40:55.109528 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:55.109539 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:55.109582 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:55.109594 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:55.109604 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:55.109615 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:55.109626 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)] 2026-02-13 05:40:55.109638 | orchestrator | 2026-02-13 05:40:55.109651 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-13 05:40:55.109663 | orchestrator | Friday 13 February 2026 05:40:44 +0000 (0:00:01.565) 0:01:14.504 ******* 2026-02-13 05:40:55.109676 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:55.109688 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:55.109700 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:55.109713 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:55.109725 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:55.109737 | orchestrator | skipping: [testbed-node-5] 
2026-02-13 05:40:55.109750 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:55.109762 | orchestrator | 2026-02-13 05:40:55.109774 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-13 05:40:55.109787 | orchestrator | Friday 13 February 2026 05:40:45 +0000 (0:00:00.816) 0:01:15.321 ******* 2026-02-13 05:40:55.109801 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:40:55.109814 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-13 05:40:55.109853 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-13 05:40:55.109865 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-13 05:40:55.109878 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-13 05:40:55.109890 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-13 05:40:55.109902 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-13 05:40:55.109914 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-13 05:40:55.109927 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-13 05:40:55.109939 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-13 05:40:55.109952 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-13 05:40:55.109964 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-13 05:40:55.109977 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-13 05:40:55.109988 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-13 05:40:55.110001 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-13 05:40:55.110011 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-13 05:40:55.110085 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-13 05:40:55.110096 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-13 
05:40:55.110107 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-13 05:40:55.110118 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-13 05:40:55.110129 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-13 05:40:55.110140 | orchestrator | 2026-02-13 05:40:55.110151 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-13 05:40:55.110162 | orchestrator | Friday 13 February 2026 05:40:47 +0000 (0:00:01.862) 0:01:17.183 ******* 2026-02-13 05:40:55.110173 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-13 05:40:55.110184 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-13 05:40:55.110195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-13 05:40:55.110206 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:55.110216 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-13 05:40:55.110246 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-13 05:40:55.110265 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-13 05:40:55.110285 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:55.110304 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-13 05:40:55.110323 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-13 05:40:55.110340 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-13 05:40:55.110351 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:55.110362 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-13 05:40:55.110373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-13 05:40:55.110383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-13 05:40:55.110394 | orchestrator | skipping: [testbed-node-3] 
2026-02-13 05:40:55.110405 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-13 05:40:55.110416 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-13 05:40:55.110426 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-13 05:40:55.110437 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:55.110449 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-13 05:40:55.110483 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-13 05:40:55.110495 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-13 05:40:55.110506 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:55.110517 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-13 05:40:55.110528 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-13 05:40:55.110582 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-13 05:40:55.110599 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:55.110616 | orchestrator | 2026-02-13 05:40:55.110633 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-13 05:40:55.110652 | orchestrator | Friday 13 February 2026 05:40:48 +0000 (0:00:01.359) 0:01:18.543 ******* 2026-02-13 05:40:55.110682 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:40:55.110702 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:40:55.110714 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:40:55.110725 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:40:55.110737 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 05:40:55.110748 | orchestrator | 2026-02-13 05:40:55.110760 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface 
from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-13 05:40:55.110785 | orchestrator | Friday 13 February 2026 05:40:49 +0000 (0:00:00.965) 0:01:19.508 ******* 2026-02-13 05:40:55.110796 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:55.110807 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:55.110818 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:55.110829 | orchestrator | 2026-02-13 05:40:55.110840 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-13 05:40:55.110851 | orchestrator | Friday 13 February 2026 05:40:50 +0000 (0:00:00.579) 0:01:20.088 ******* 2026-02-13 05:40:55.110862 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:55.110873 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:55.110883 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:55.110894 | orchestrator | 2026-02-13 05:40:55.110905 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-13 05:40:55.110916 | orchestrator | Friday 13 February 2026 05:40:50 +0000 (0:00:00.378) 0:01:20.466 ******* 2026-02-13 05:40:55.110927 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:55.110938 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:40:55.110949 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:40:55.110960 | orchestrator | 2026-02-13 05:40:55.110970 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-13 05:40:55.110981 | orchestrator | Friday 13 February 2026 05:40:50 +0000 (0:00:00.397) 0:01:20.864 ******* 2026-02-13 05:40:55.110992 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:55.111003 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:55.111014 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:55.111025 | orchestrator | 2026-02-13 05:40:55.111036 | orchestrator | TASK [ceph-facts : Set_fact _interface] 
**************************************** 2026-02-13 05:40:55.111047 | orchestrator | Friday 13 February 2026 05:40:51 +0000 (0:00:00.485) 0:01:21.349 ******* 2026-02-13 05:40:55.111058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 05:40:55.111068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 05:40:55.111079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 05:40:55.111090 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:55.111101 | orchestrator | 2026-02-13 05:40:55.111112 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-13 05:40:55.111123 | orchestrator | Friday 13 February 2026 05:40:51 +0000 (0:00:00.400) 0:01:21.750 ******* 2026-02-13 05:40:55.111134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 05:40:55.111145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 05:40:55.111156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 05:40:55.111167 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:55.111178 | orchestrator | 2026-02-13 05:40:55.111188 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-13 05:40:55.111208 | orchestrator | Friday 13 February 2026 05:40:52 +0000 (0:00:00.644) 0:01:22.395 ******* 2026-02-13 05:40:55.111219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-13 05:40:55.111230 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-13 05:40:55.111242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-13 05:40:55.111261 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:40:55.111293 | orchestrator | 2026-02-13 05:40:55.111312 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-13 
05:40:55.111333 | orchestrator | Friday 13 February 2026 05:40:53 +0000 (0:00:00.641) 0:01:23.036 ******* 2026-02-13 05:40:55.111353 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:40:55.111373 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:40:55.111388 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:40:55.111399 | orchestrator | 2026-02-13 05:40:55.111410 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-13 05:40:55.111421 | orchestrator | Friday 13 February 2026 05:40:53 +0000 (0:00:00.583) 0:01:23.619 ******* 2026-02-13 05:40:55.111431 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-13 05:40:55.111442 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-13 05:40:55.111453 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-13 05:40:55.111464 | orchestrator | 2026-02-13 05:40:55.111475 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-13 05:40:55.111485 | orchestrator | Friday 13 February 2026 05:40:54 +0000 (0:00:00.571) 0:01:24.191 ******* 2026-02-13 05:40:55.111496 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:40:55.111507 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 05:40:55.111519 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 05:40:55.111540 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-13 05:41:24.096290 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-13 05:41:24.096439 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-13 05:41:24.096459 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-13 05:41:24.096472 | orchestrator | 2026-02-13 
05:41:24.096485 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-13 05:41:24.096498 | orchestrator | Friday 13 February 2026 05:40:55 +0000 (0:00:00.777) 0:01:24.968 ******* 2026-02-13 05:41:24.096525 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:41:24.096597 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 05:41:24.096609 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 05:41:24.096620 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-13 05:41:24.096632 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-13 05:41:24.096643 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-13 05:41:24.096654 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-13 05:41:24.096665 | orchestrator | 2026-02-13 05:41:24.096676 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] ************************** 2026-02-13 05:41:24.096687 | orchestrator | Friday 13 February 2026 05:40:57 +0000 (0:00:02.236) 0:01:27.204 ******* 2026-02-13 05:41:24.096702 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:41:24.096723 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:41:24.096741 | orchestrator | changed: [testbed-manager] 2026-02-13 05:41:24.096759 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:41:24.096776 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:41:24.096794 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:41:24.096841 | orchestrator | changed: [testbed-node-1] 2026-02-13 05:41:24.096862 | orchestrator | 2026-02-13 05:41:24.096880 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] 
*********************** 2026-02-13 05:41:24.096899 | orchestrator | Friday 13 February 2026 05:41:07 +0000 (0:00:10.180) 0:01:37.384 ******* 2026-02-13 05:41:24.096919 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.096938 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.096958 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.096979 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.096998 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.097017 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.097037 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.097048 | orchestrator | 2026-02-13 05:41:24.097059 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ******************************** 2026-02-13 05:41:24.097071 | orchestrator | Friday 13 February 2026 05:41:08 +0000 (0:00:00.979) 0:01:38.364 ******* 2026-02-13 05:41:24.097081 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.097092 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.097103 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.097114 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.097125 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.097136 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.097146 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.097157 | orchestrator | 2026-02-13 05:41:24.097168 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ******************************** 2026-02-13 05:41:24.097179 | orchestrator | Friday 13 February 2026 05:41:09 +0000 (0:00:00.768) 0:01:39.132 ******* 2026-02-13 05:41:24.097190 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.097201 | orchestrator | changed: [testbed-node-2] 2026-02-13 05:41:24.097212 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:41:24.097223 | orchestrator | changed: [testbed-node-1] 
2026-02-13 05:41:24.097233 | orchestrator | changed: [testbed-node-3] 2026-02-13 05:41:24.097244 | orchestrator | changed: [testbed-node-4] 2026-02-13 05:41:24.097255 | orchestrator | changed: [testbed-node-5] 2026-02-13 05:41:24.097266 | orchestrator | 2026-02-13 05:41:24.097278 | orchestrator | TASK [ceph-validate : Include check_system.yml] ******************************** 2026-02-13 05:41:24.097289 | orchestrator | Friday 13 February 2026 05:41:11 +0000 (0:00:02.330) 0:01:41.463 ******* 2026-02-13 05:41:24.097302 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-13 05:41:24.097314 | orchestrator | 2026-02-13 05:41:24.097325 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] *************** 2026-02-13 05:41:24.097337 | orchestrator | Friday 13 February 2026 05:41:13 +0000 (0:00:01.872) 0:01:43.335 ******* 2026-02-13 05:41:24.097349 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.097367 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.097386 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.097404 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.097422 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.097440 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.097458 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.097475 | orchestrator | 2026-02-13 05:41:24.097495 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-02-13 05:41:24.097513 | orchestrator | Friday 13 February 2026 05:41:14 +0000 (0:00:00.976) 0:01:44.312 ******* 2026-02-13 05:41:24.097562 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.097583 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.097601 | orchestrator | skipping: 
[testbed-node-2] 2026-02-13 05:41:24.097618 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.097637 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.097656 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.097674 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.097711 | orchestrator | 2026-02-13 05:41:24.097730 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-02-13 05:41:24.097771 | orchestrator | Friday 13 February 2026 05:41:15 +0000 (0:00:00.963) 0:01:45.275 ******* 2026-02-13 05:41:24.097783 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.097794 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.097805 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.097815 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.097826 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.097836 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.097847 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.097858 | orchestrator | 2026-02-13 05:41:24.097869 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-02-13 05:41:24.097880 | orchestrator | Friday 13 February 2026 05:41:16 +0000 (0:00:00.778) 0:01:46.054 ******* 2026-02-13 05:41:24.097909 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.097928 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.097948 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.097967 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.097986 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.098005 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.098081 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.098095 | orchestrator | 2026-02-13 05:41:24.098114 | orchestrator | TASK [ceph-validate : Fail on unsupported 
CentOS release] ********************** 2026-02-13 05:41:24.098134 | orchestrator | Friday 13 February 2026 05:41:17 +0000 (0:00:01.000) 0:01:47.054 ******* 2026-02-13 05:41:24.098154 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.098173 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.098189 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.098208 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.098225 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.098245 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.098262 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.098281 | orchestrator | 2026-02-13 05:41:24.098300 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-02-13 05:41:24.098320 | orchestrator | Friday 13 February 2026 05:41:17 +0000 (0:00:00.790) 0:01:47.845 ******* 2026-02-13 05:41:24.098339 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.098355 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.098366 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.098376 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.098387 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.098398 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.098408 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.098419 | orchestrator | 2026-02-13 05:41:24.098430 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-02-13 05:41:24.098441 | orchestrator | Friday 13 February 2026 05:41:18 +0000 (0:00:01.012) 0:01:48.857 ******* 2026-02-13 05:41:24.098452 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.098463 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.098474 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.098484 | orchestrator | 
skipping: [testbed-node-3] 2026-02-13 05:41:24.098495 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.098506 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.098516 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.098527 | orchestrator | 2026-02-13 05:41:24.098626 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-02-13 05:41:24.098638 | orchestrator | Friday 13 February 2026 05:41:19 +0000 (0:00:00.751) 0:01:49.609 ******* 2026-02-13 05:41:24.098648 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.098673 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.098694 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.098723 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.098742 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.098761 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.098778 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.098795 | orchestrator | 2026-02-13 05:41:24.098814 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-02-13 05:41:24.098832 | orchestrator | Friday 13 February 2026 05:41:20 +0000 (0:00:01.026) 0:01:50.635 ******* 2026-02-13 05:41:24.098848 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.098865 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.098883 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.098901 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.098919 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.098937 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.098955 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.098973 | orchestrator | 2026-02-13 05:41:24.098992 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-02-13 
05:41:24.099011 | orchestrator | Friday 13 February 2026 05:41:21 +0000 (0:00:00.952) 0:01:51.588 ******* 2026-02-13 05:41:24.099030 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.099048 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.099066 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.099084 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.099102 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.099121 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.099140 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.099160 | orchestrator | 2026-02-13 05:41:24.099181 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-02-13 05:41:24.099200 | orchestrator | Friday 13 February 2026 05:41:22 +0000 (0:00:00.769) 0:01:52.358 ******* 2026-02-13 05:41:24.099220 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.099239 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.099259 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.099279 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.099297 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:24.099316 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:24.099337 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:24.099356 | orchestrator | 2026-02-13 05:41:24.099370 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-02-13 05:41:24.099381 | orchestrator | Friday 13 February 2026 05:41:23 +0000 (0:00:00.972) 0:01:53.330 ******* 2026-02-13 05:41:24.099392 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:24.099403 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:24.099415 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:24.099426 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:24.099455 | orchestrator 
| skipping: [testbed-node-4] 2026-02-13 05:41:33.520163 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:33.520276 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:33.520293 | orchestrator | 2026-02-13 05:41:33.520306 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-02-13 05:41:33.520318 | orchestrator | Friday 13 February 2026 05:41:24 +0000 (0:00:00.750) 0:01:54.081 ******* 2026-02-13 05:41:33.520329 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:33.520341 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:33.520352 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:33.520381 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})  2026-02-13 05:41:33.520395 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})  2026-02-13 05:41:33.520406 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:33.520417 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 05:41:33.520450 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 05:41:33.520461 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:33.520473 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 05:41:33.520484 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  
2026-02-13 05:41:33.520495 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:33.520506 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:33.520517 | orchestrator | 2026-02-13 05:41:33.520775 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-02-13 05:41:33.520802 | orchestrator | Friday 13 February 2026 05:41:25 +0000 (0:00:01.015) 0:01:55.097 ******* 2026-02-13 05:41:33.520820 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:33.520839 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:33.520857 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:33.520875 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:33.520894 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:33.520912 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:33.520930 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:33.520950 | orchestrator | 2026-02-13 05:41:33.520969 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-02-13 05:41:33.520991 | orchestrator | Friday 13 February 2026 05:41:26 +0000 (0:00:00.783) 0:01:55.881 ******* 2026-02-13 05:41:33.521012 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:33.521030 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:33.521048 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:33.521059 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:33.521070 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:33.521080 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:33.521091 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:33.521102 | orchestrator | 2026-02-13 05:41:33.521113 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-02-13 05:41:33.521124 | orchestrator | Friday 13 February 2026 05:41:27 +0000 (0:00:00.997) 0:01:56.879 ******* 
2026-02-13 05:41:33.521134 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:33.521145 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:33.521156 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:33.521167 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:33.521177 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:33.521188 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:33.521199 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:33.521210 | orchestrator | 2026-02-13 05:41:33.521221 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-02-13 05:41:33.521232 | orchestrator | Friday 13 February 2026 05:41:27 +0000 (0:00:00.725) 0:01:57.604 ******* 2026-02-13 05:41:33.521243 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:33.521254 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:33.521265 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:33.521276 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:33.521286 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:33.521297 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:33.521308 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:33.521318 | orchestrator | 2026-02-13 05:41:33.521329 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-02-13 05:41:33.521340 | orchestrator | Friday 13 February 2026 05:41:28 +0000 (0:00:01.000) 0:01:58.605 ******* 2026-02-13 05:41:33.521351 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:33.521377 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:33.521388 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:33.521399 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:33.521410 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:33.521420 | orchestrator | skipping: [testbed-node-5] 
2026-02-13 05:41:33.521431 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:33.521442 | orchestrator | 2026-02-13 05:41:33.521453 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-13 05:41:33.521464 | orchestrator | Friday 13 February 2026 05:41:29 +0000 (0:00:01.035) 0:01:59.641 ******* 2026-02-13 05:41:33.521474 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:33.521485 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:33.521496 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:33.521507 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:33.521517 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:33.521573 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:33.521584 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:33.521595 | orchestrator | 2026-02-13 05:41:33.521629 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-13 05:41:33.521641 | orchestrator | Friday 13 February 2026 05:41:30 +0000 (0:00:00.771) 0:02:00.412 ******* 2026-02-13 05:41:33.521651 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:33.521662 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:33.521673 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:33.521683 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:33.521695 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 05:41:33.521706 | orchestrator | 2026-02-13 05:41:33.521726 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-13 05:41:33.521737 | orchestrator | Friday 13 February 2026 05:41:32 +0000 (0:00:01.581) 0:02:01.994 ******* 2026-02-13 05:41:33.521748 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:41:33.521759 | orchestrator | ok: 
[testbed-node-4] 2026-02-13 05:41:33.521770 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:41:33.521780 | orchestrator | 2026-02-13 05:41:33.521791 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-13 05:41:33.521802 | orchestrator | Friday 13 February 2026 05:41:32 +0000 (0:00:00.374) 0:02:02.368 ******* 2026-02-13 05:41:33.521813 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})  2026-02-13 05:41:33.521824 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})  2026-02-13 05:41:33.521835 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:33.521846 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 05:41:33.521857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 05:41:33.521868 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:33.521879 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 05:41:33.521890 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 05:41:33.521900 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:33.521911 | orchestrator | 2026-02-13 05:41:33.521922 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-13 05:41:33.521933 | orchestrator | Friday 13 February 2026 
05:41:32 +0000 (0:00:00.384) 0:02:02.752 ******* 2026-02-13 05:41:33.521953 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:33.521967 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:33.521978 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:33.521989 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:33.522000 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:33.522011 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:33.522085 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}, 
'ansible_loop_var': 'item'})  2026-02-13 05:41:33.522106 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:36.647200 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:36.647287 | orchestrator | 2026-02-13 05:41:36.647299 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-13 05:41:36.647309 | orchestrator | Friday 13 February 2026 05:41:33 +0000 (0:00:00.626) 0:02:03.378 ******* 2026-02-13 05:41:36.647318 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:36.647326 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:36.647351 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:36.647369 | orchestrator | 2026-02-13 05:41:36.647392 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-13 05:41:36.647401 | orchestrator | Friday 13 February 2026 05:41:33 +0000 (0:00:00.355) 0:02:03.734 ******* 2026-02-13 05:41:36.647408 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:36.647416 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:36.647424 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:36.647432 | orchestrator | 2026-02-13 05:41:36.647440 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-13 05:41:36.647447 | orchestrator | Friday 13 February 2026 05:41:34 +0000 (0:00:00.357) 0:02:04.092 ******* 2026-02-13 05:41:36.647456 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:36.647464 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:36.647472 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:36.647479 | 
orchestrator | 2026-02-13 05:41:36.647487 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-13 05:41:36.647495 | orchestrator | Friday 13 February 2026 05:41:34 +0000 (0:00:00.303) 0:02:04.395 ******* 2026-02-13 05:41:36.647503 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:36.647510 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:36.647566 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:36.647576 | orchestrator | 2026-02-13 05:41:36.647584 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-13 05:41:36.647592 | orchestrator | Friday 13 February 2026 05:41:34 +0000 (0:00:00.320) 0:02:04.715 ******* 2026-02-13 05:41:36.647600 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}) 2026-02-13 05:41:36.647610 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}) 2026-02-13 05:41:36.647618 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}) 2026-02-13 05:41:36.647626 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}) 2026-02-13 05:41:36.647634 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}) 2026-02-13 05:41:36.647642 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}) 2026-02-13 05:41:36.647650 | orchestrator | 2026-02-13 05:41:36.647658 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-13 05:41:36.647666 | orchestrator | Friday 13 February 2026 05:41:36 +0000 (0:00:01.392) 0:02:06.107 ******* 2026-02-13 05:41:36.647680 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f/osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1770953972.7269742, 'mtime': 1770953972.7209742, 'ctime': 1770953972.7209742, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f/osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:36.647714 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-7c5ad083-16ef-5861-9238-a28b124c66ab/osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 
1770953991.2062955, 'mtime': 1770953991.2022955, 'ctime': 1770953991.2022955, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-7c5ad083-16ef-5861-9238-a28b124c66ab/osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:36.647732 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:36.647740 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6/osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1770953968.3952706, 'mtime': 1770953968.3882704, 'ctime': 1770953968.3882704, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6/osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:36.647750 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f/osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1770953987.1075842, 'mtime': 1770953987.1015842, 'ctime': 1770953987.1015842, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f/osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:36.647758 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:36.647778 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-8151fb69-3858-5887-af01-e0d44d84b3e6/osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 954, 'dev': 6, 'nlink': 1, 'atime': 1770953970.9330325, 'mtime': 1770953970.9280324, 'ctime': 1770953970.9280324, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8151fb69-3858-5887-af01-e0d44d84b3e6/osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.340326 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1/osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 964, 'dev': 6, 'nlink': 1, 'atime': 1770953989.4863594, 'mtime': 1770953989.4793594, 'ctime': 1770953989.4793594, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': 
False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1/osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.340417 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:38.340433 | orchestrator | 2026-02-13 05:41:38.340449 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-02-13 05:41:38.340465 | orchestrator | Friday 13 February 2026 05:41:36 +0000 (0:00:00.401) 0:02:06.509 ******* 2026-02-13 05:41:38.340479 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})  2026-02-13 05:41:38.340494 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})  2026-02-13 05:41:38.340507 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:38.340520 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 05:41:38.340586 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 05:41:38.340601 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:38.340614 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 
'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 05:41:38.340628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 05:41:38.340641 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:38.340654 | orchestrator | 2026-02-13 05:41:38.340668 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-13 05:41:38.340683 | orchestrator | Friday 13 February 2026 05:41:36 +0000 (0:00:00.357) 0:02:06.866 ******* 2026-02-13 05:41:38.340696 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.340707 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.340739 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:38.340761 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.340786 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': 
{'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.340795 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:38.340803 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.340812 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.340820 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:38.340828 | orchestrator | 2026-02-13 05:41:38.340836 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-13 05:41:38.340844 | orchestrator | Friday 13 February 2026 05:41:37 +0000 (0:00:00.352) 0:02:07.219 ******* 2026-02-13 05:41:38.340851 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})  2026-02-13 05:41:38.340859 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})  2026-02-13 05:41:38.340868 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:38.340877 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 05:41:38.340886 | orchestrator | 
skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 05:41:38.340895 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:38.340904 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 05:41:38.340913 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 05:41:38.340923 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:38.340932 | orchestrator | 2026-02-13 05:41:38.340941 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-13 05:41:38.340950 | orchestrator | Friday 13 February 2026 05:41:37 +0000 (0:00:00.576) 0:02:07.796 ******* 2026-02-13 05:41:38.340959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.340975 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.340985 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:38.340994 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 
'item': {'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.341003 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.341023 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:38.341033 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:38.341048 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}, 'ansible_loop_var': 'item'})  2026-02-13 05:41:42.828934 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:42.829038 | orchestrator | 2026-02-13 05:41:42.829054 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-13 05:41:42.829065 | orchestrator | Friday 13 February 2026 05:41:38 +0000 (0:00:00.402) 0:02:08.198 ******* 2026-02-13 05:41:42.829075 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:42.829085 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:42.829095 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:42.829105 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:42.829115 | orchestrator | skipping: 
[testbed-node-4] 2026-02-13 05:41:42.829124 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:42.829134 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:42.829144 | orchestrator | 2026-02-13 05:41:42.829153 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-13 05:41:42.829163 | orchestrator | Friday 13 February 2026 05:41:39 +0000 (0:00:00.753) 0:02:08.952 ******* 2026-02-13 05:41:42.829173 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:42.829182 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:42.829191 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:42.829201 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:42.829211 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 05:41:42.829221 | orchestrator | 2026-02-13 05:41:42.829231 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-13 05:41:42.829240 | orchestrator | Friday 13 February 2026 05:41:40 +0000 (0:00:01.695) 0:02:10.648 ******* 2026-02-13 05:41:42.829251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829313 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829323 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:42.829332 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829351 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829379 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:42.829389 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829436 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:42.829446 | orchestrator 
| 2026-02-13 05:41:42.829456 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-13 05:41:42.829465 | orchestrator | Friday 13 February 2026 05:41:41 +0000 (0:00:00.462) 0:02:11.111 ******* 2026-02-13 05:41:42.829488 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829500 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829591 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829602 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:42.829614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829625 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-13 05:41:42.829677 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:42.829688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829710 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829721 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829743 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:42.829754 | orchestrator | 2026-02-13 05:41:42.829766 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-13 05:41:42.829777 | orchestrator | Friday 13 February 2026 05:41:41 +0000 (0:00:00.757) 0:02:11.868 ******* 2026-02-13 05:41:42.829788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829821 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  
2026-02-13 05:41:42.829832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829843 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:42.829853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829862 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829900 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:42.829909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 05:41:42.829953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-13 05:41:42.829962 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:42.829978 | orchestrator | 2026-02-13 05:41:42.829987 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-13 05:41:42.829997 | orchestrator | Friday 13 February 2026 05:41:42 +0000 (0:00:00.420) 0:02:12.289 ******* 2026-02-13 05:41:42.830007 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:42.830061 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:42.830079 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:49.775296 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:49.775414 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:49.775431 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:49.775443 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:49.775454 | orchestrator | 2026-02-13 05:41:49.775467 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-13 05:41:49.775479 | orchestrator | Friday 13 February 2026 05:41:43 +0000 (0:00:00.776) 0:02:13.066 ******* 2026-02-13 05:41:49.775491 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:49.775502 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:49.775513 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:49.775580 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:49.775593 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:49.775603 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:49.775614 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:49.775624 | orchestrator | 2026-02-13 05:41:49.775635 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-13 05:41:49.775646 | orchestrator | Friday 13 February 2026 05:41:44 +0000 (0:00:01.099) 0:02:14.165 ******* 2026-02-13 05:41:49.775656 | orchestrator | skipping: 
[testbed-node-0] 2026-02-13 05:41:49.775666 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:49.775675 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:49.775684 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:49.775693 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:49.775703 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:49.775712 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:49.775722 | orchestrator | 2026-02-13 05:41:49.775731 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] *** 2026-02-13 05:41:49.775741 | orchestrator | Friday 13 February 2026 05:41:45 +0000 (0:00:00.776) 0:02:14.942 ******* 2026-02-13 05:41:49.775751 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:49.775760 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:49.775770 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:49.775780 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:49.775790 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:49.775799 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:49.775810 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:49.775819 | orchestrator | 2026-02-13 05:41:49.775830 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-13 05:41:49.775842 | orchestrator | Friday 13 February 2026 05:41:46 +0000 (0:00:01.021) 0:02:15.964 ******* 2026-02-13 05:41:49.775853 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:49.775863 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:49.775873 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:49.775883 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:49.775893 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:49.775903 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:49.775912 | 
orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:49.775923 | orchestrator | 2026-02-13 05:41:49.775935 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-13 05:41:49.775946 | orchestrator | Friday 13 February 2026 05:41:47 +0000 (0:00:00.982) 0:02:16.946 ******* 2026-02-13 05:41:49.775957 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:49.775968 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:49.775979 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:49.776018 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:49.776030 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:49.776048 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:49.776059 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:49.776069 | orchestrator | 2026-02-13 05:41:49.776081 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-13 05:41:49.776092 | orchestrator | Friday 13 February 2026 05:41:47 +0000 (0:00:00.817) 0:02:17.763 ******* 2026-02-13 05:41:49.776102 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:49.776113 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:49.776125 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:49.776136 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:49.776147 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:49.776158 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:49.776170 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:49.776181 | orchestrator | 2026-02-13 05:41:49.776193 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-13 05:41:49.776204 | orchestrator | Friday 13 February 2026 05:41:49 +0000 (0:00:01.119) 0:02:18.883 ******* 2026-02-13 05:41:49.776217 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:49.776230 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:49.776259 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:49.776273 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:49.776285 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:41:49.776299 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:41:49.776310 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:49.776343 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:49.776355 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:49.776366 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:49.776377 | orchestrator | skipping: 
[testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:49.776388 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:41:49.776400 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:41:49.776412 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:49.776422 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:49.776460 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:49.776482 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:49.776492 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:49.776503 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:41:49.776514 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  
2026-02-13 05:41:49.776544 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:49.776555 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:49.776565 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:49.776576 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:49.776586 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:49.776596 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:49.776606 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:49.776624 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:41:49.776634 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:49.776643 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 
'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:41:49.776662 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:52.035449 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:52.035582 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:52.035596 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:41:52.035606 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:52.035635 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:41:52.035642 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:52.035650 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:52.035658 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:52.035664 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:52.035671 | orchestrator | skipping: 
[testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:52.035678 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:52.035682 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:41:52.035686 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:41:52.035689 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:52.035694 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:52.035701 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:41:52.035707 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:41:52.035713 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:52.035719 | orchestrator | 2026-02-13 05:41:52.035727 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************ 2026-02-13 05:41:52.035735 | orchestrator | Friday 13 February 2026 05:41:50 +0000 (0:00:01.027) 0:02:19.911 ******* 2026-02-13 05:41:52.035741 | orchestrator | skipping: [testbed-node-0] 2026-02-13 
05:41:52.035748 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:52.035754 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:52.035761 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:41:52.035767 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:41:52.035773 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:41:52.035780 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:41:52.035786 | orchestrator | 2026-02-13 05:41:52.035793 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] **************************** 2026-02-13 05:41:52.035813 | orchestrator | Friday 13 February 2026 05:41:51 +0000 (0:00:01.140) 0:02:21.051 ******* 2026-02-13 05:41:52.035819 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:52.035826 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:52.035831 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:52.035841 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:52.035861 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:41:52.035869 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 
'client.manila'})  2026-02-13 05:41:52.035875 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:41:52.035882 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:52.035888 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:52.035895 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:52.035901 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:52.035908 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:41:52.035914 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:41:52.035921 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:41:52.035928 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:52.035934 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:52.035940 | orchestrator | skipping: [testbed-node-2] => (item={'caps': 
{'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:52.035947 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:52.035953 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:41:52.035960 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:41:52.035966 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:41:52.035973 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:52.035979 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:52.035986 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:52.036002 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:41:52.036009 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:41:52.036015 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 
'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:41:52.036022 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:41:52.036034 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:42:07.955746 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:42:07.955892 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:42:07.955918 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:42:07.955938 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:42:07.955960 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:42:07.955979 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:42:07.956000 | orchestrator | skipping: 
[testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:42:07.956018 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:42:07.956038 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:42:07.956057 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:42:07.956075 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:42:07.956094 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 05:42:07.956112 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 05:42:07.956134 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 05:42:07.956154 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-13 05:42:07.956176 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:42:07.956235 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, 
profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-13 05:42:07.956259 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:42:07.956282 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-13 05:42:07.956303 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:42:07.956325 | orchestrator | 2026-02-13 05:42:07.956350 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-02-13 05:42:07.956393 | orchestrator | Friday 13 February 2026 05:41:52 +0000 (0:00:01.034) 0:02:22.086 ******* 2026-02-13 05:42:07.956415 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:07.956438 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:42:07.956463 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:42:07.956484 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:42:07.956504 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:42:07.956555 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:42:07.956576 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:42:07.956596 | orchestrator | 2026-02-13 05:42:07.956617 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-02-13 05:42:07.956638 | orchestrator | Friday 13 February 2026 05:41:53 +0000 (0:00:01.121) 0:02:23.207 ******* 2026-02-13 05:42:07.956657 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:07.956677 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:42:07.956698 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:42:07.956718 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:42:07.956739 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:42:07.956758 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:42:07.956779 | orchestrator | skipping: [testbed-manager] 2026-02-13 
05:42:07.956799 | orchestrator | 2026-02-13 05:42:07.956819 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-02-13 05:42:07.956866 | orchestrator | Friday 13 February 2026 05:41:54 +0000 (0:00:00.752) 0:02:23.960 ******* 2026-02-13 05:42:07.956888 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:07.956906 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:42:07.956925 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:42:07.956943 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:42:07.956961 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:42:07.956979 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:42:07.956997 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:42:07.957015 | orchestrator | 2026-02-13 05:42:07.957034 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-13 05:42:07.957052 | orchestrator | Friday 13 February 2026 05:41:56 +0000 (0:00:01.915) 0:02:25.876 ******* 2026-02-13 05:42:07.957071 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-13 05:42:07.957093 | orchestrator | 2026-02-13 05:42:07.957111 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-02-13 05:42:07.957129 | orchestrator | Friday 13 February 2026 05:41:58 +0000 (0:00:02.032) 0:02:27.908 ******* 2026-02-13 05:42:07.957147 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-13 05:42:07.957167 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-13 05:42:07.957187 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-13 
05:42:07.957207 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-13 05:42:07.957246 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-13 05:42:07.957266 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-13 05:42:07.957283 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-13 05:42:07.957300 | orchestrator | 2026-02-13 05:42:07.957318 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-02-13 05:42:07.957337 | orchestrator | Friday 13 February 2026 05:41:58 +0000 (0:00:00.947) 0:02:28.855 ******* 2026-02-13 05:42:07.957355 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:07.957376 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:42:07.957388 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:42:07.957399 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:42:07.957410 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:42:07.957421 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:42:07.957432 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:42:07.957442 | orchestrator | 2026-02-13 05:42:07.957453 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-02-13 05:42:07.957464 | orchestrator | Friday 13 February 2026 05:42:00 +0000 (0:00:01.084) 0:02:29.940 ******* 2026-02-13 05:42:07.957475 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:07.957485 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:42:07.957496 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:42:07.957507 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:42:07.957567 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:42:07.957588 | orchestrator | skipping: [testbed-node-5] 
2026-02-13 05:42:07.957607 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:42:07.957626 | orchestrator | 2026-02-13 05:42:07.957638 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-02-13 05:42:07.957648 | orchestrator | Friday 13 February 2026 05:42:00 +0000 (0:00:00.804) 0:02:30.744 ******* 2026-02-13 05:42:07.957659 | orchestrator | ok: [testbed-node-1] 2026-02-13 05:42:07.957671 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:07.957681 | orchestrator | ok: [testbed-node-2] 2026-02-13 05:42:07.957692 | orchestrator | ok: [testbed-node-3] 2026-02-13 05:42:07.957703 | orchestrator | ok: [testbed-node-4] 2026-02-13 05:42:07.957713 | orchestrator | ok: [testbed-node-5] 2026-02-13 05:42:07.957724 | orchestrator | ok: [testbed-manager] 2026-02-13 05:42:07.957735 | orchestrator | 2026-02-13 05:42:07.957746 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-02-13 05:42:07.957756 | orchestrator | Friday 13 February 2026 05:42:02 +0000 (0:00:01.502) 0:02:32.246 ******* 2026-02-13 05:42:07.957768 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:07.957778 | orchestrator | skipping: [testbed-node-1] 2026-02-13 05:42:07.957854 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:42:07.957866 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:42:07.957877 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:42:07.957887 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:42:07.957898 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:42:07.957909 | orchestrator | 2026-02-13 05:42:07.957930 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-13 05:42:07.957941 | orchestrator | Friday 13 February 2026 05:42:03 +0000 (0:00:01.541) 0:02:33.787 ******* 2026-02-13 05:42:07.957952 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:07.957963 | orchestrator | 
skipping: [testbed-node-1] 2026-02-13 05:42:07.957974 | orchestrator | skipping: [testbed-node-2] 2026-02-13 05:42:07.957984 | orchestrator | skipping: [testbed-node-3] 2026-02-13 05:42:07.957995 | orchestrator | skipping: [testbed-node-4] 2026-02-13 05:42:07.958006 | orchestrator | skipping: [testbed-node-5] 2026-02-13 05:42:07.958082 | orchestrator | skipping: [testbed-manager] 2026-02-13 05:42:07.958106 | orchestrator | 2026-02-13 05:42:07.958123 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-02-13 05:42:07.958159 | orchestrator | Friday 13 February 2026 05:42:05 +0000 (0:00:01.550) 0:02:35.338 ******* 2026-02-13 05:42:07.958179 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:07.958192 | orchestrator | 2026-02-13 05:42:07.958203 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-02-13 05:42:07.958214 | orchestrator | Friday 13 February 2026 05:42:07 +0000 (0:00:01.759) 0:02:37.097 ******* 2026-02-13 05:42:07.958224 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:07.958235 | orchestrator | 2026-02-13 05:42:07.958264 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-02-13 05:42:26.348804 | orchestrator | 2026-02-13 05:42:26.348939 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-13 05:42:26.348957 | orchestrator | Friday 13 February 2026 05:42:07 +0000 (0:00:00.711) 0:02:37.809 ******* 2026-02-13 05:42:26.348969 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.348982 | orchestrator | 2026-02-13 05:42:26.348993 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-13 05:42:26.349004 | orchestrator | Friday 13 February 2026 05:42:08 +0000 (0:00:00.528) 0:02:38.338 ******* 2026-02-13 05:42:26.349016 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349027 | 
orchestrator | 2026-02-13 05:42:26.349038 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-02-13 05:42:26.349049 | orchestrator | Friday 13 February 2026 05:42:09 +0000 (0:00:00.568) 0:02:38.907 ******* 2026-02-13 05:42:26.349062 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-13 05:42:26.349076 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-13 05:42:26.349088 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-13 05:42:26.349099 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-13 05:42:26.349111 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 
'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-13 05:42:26.349125 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}])  2026-02-13 05:42:26.349138 | orchestrator | 2026-02-13 05:42:26.349149 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-13 05:42:26.349183 | orchestrator | 2026-02-13 05:42:26.349195 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-13 05:42:26.349205 | orchestrator | Friday 13 February 2026 05:42:18 +0000 (0:00:09.952) 0:02:48.859 ******* 2026-02-13 05:42:26.349216 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349227 | orchestrator | 2026-02-13 05:42:26.349252 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-13 05:42:26.349263 | orchestrator | Friday 13 February 2026 05:42:19 +0000 (0:00:00.476) 0:02:49.335 ******* 2026-02-13 05:42:26.349275 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349286 | orchestrator | 2026-02-13 05:42:26.349296 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-13 05:42:26.349307 | orchestrator | Friday 13 February 2026 05:42:19 +0000 (0:00:00.146) 0:02:49.481 ******* 2026-02-13 05:42:26.349318 | orchestrator | skipping: 
[testbed-node-0] 2026-02-13 05:42:26.349330 | orchestrator | 2026-02-13 05:42:26.349341 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-13 05:42:26.349351 | orchestrator | Friday 13 February 2026 05:42:19 +0000 (0:00:00.128) 0:02:49.610 ******* 2026-02-13 05:42:26.349362 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349373 | orchestrator | 2026-02-13 05:42:26.349384 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-13 05:42:26.349394 | orchestrator | Friday 13 February 2026 05:42:19 +0000 (0:00:00.139) 0:02:49.749 ******* 2026-02-13 05:42:26.349405 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-13 05:42:26.349416 | orchestrator | 2026-02-13 05:42:26.349427 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-13 05:42:26.349455 | orchestrator | Friday 13 February 2026 05:42:20 +0000 (0:00:00.270) 0:02:50.020 ******* 2026-02-13 05:42:26.349466 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349477 | orchestrator | 2026-02-13 05:42:26.349488 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-13 05:42:26.349499 | orchestrator | Friday 13 February 2026 05:42:20 +0000 (0:00:00.481) 0:02:50.501 ******* 2026-02-13 05:42:26.349541 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349554 | orchestrator | 2026-02-13 05:42:26.349565 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-13 05:42:26.349576 | orchestrator | Friday 13 February 2026 05:42:20 +0000 (0:00:00.135) 0:02:50.637 ******* 2026-02-13 05:42:26.349587 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349597 | orchestrator | 2026-02-13 05:42:26.349608 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 
2026-02-13 05:42:26.349619 | orchestrator | Friday 13 February 2026 05:42:21 +0000 (0:00:00.479) 0:02:51.116 ******* 2026-02-13 05:42:26.349630 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349641 | orchestrator | 2026-02-13 05:42:26.349651 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-13 05:42:26.349662 | orchestrator | Friday 13 February 2026 05:42:21 +0000 (0:00:00.348) 0:02:51.465 ******* 2026-02-13 05:42:26.349673 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349684 | orchestrator | 2026-02-13 05:42:26.349695 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-13 05:42:26.349705 | orchestrator | Friday 13 February 2026 05:42:21 +0000 (0:00:00.140) 0:02:51.606 ******* 2026-02-13 05:42:26.349716 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349727 | orchestrator | 2026-02-13 05:42:26.349737 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-13 05:42:26.349749 | orchestrator | Friday 13 February 2026 05:42:21 +0000 (0:00:00.150) 0:02:51.756 ******* 2026-02-13 05:42:26.349759 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:26.349770 | orchestrator | 2026-02-13 05:42:26.349781 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-13 05:42:26.349791 | orchestrator | Friday 13 February 2026 05:42:22 +0000 (0:00:00.146) 0:02:51.903 ******* 2026-02-13 05:42:26.349811 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349822 | orchestrator | 2026-02-13 05:42:26.349833 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-13 05:42:26.349844 | orchestrator | Friday 13 February 2026 05:42:22 +0000 (0:00:00.134) 0:02:52.037 ******* 2026-02-13 05:42:26.349854 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:42:26.349865 
| orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 05:42:26.349876 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 05:42:26.349886 | orchestrator | 2026-02-13 05:42:26.349897 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-13 05:42:26.349908 | orchestrator | Friday 13 February 2026 05:42:22 +0000 (0:00:00.643) 0:02:52.681 ******* 2026-02-13 05:42:26.349918 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:26.349929 | orchestrator | 2026-02-13 05:42:26.349940 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-13 05:42:26.349951 | orchestrator | Friday 13 February 2026 05:42:23 +0000 (0:00:00.257) 0:02:52.938 ******* 2026-02-13 05:42:26.349961 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:42:26.349972 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 05:42:26.349983 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 05:42:26.349993 | orchestrator | 2026-02-13 05:42:26.350004 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-13 05:42:26.350015 | orchestrator | Friday 13 February 2026 05:42:24 +0000 (0:00:01.864) 0:02:54.803 ******* 2026-02-13 05:42:26.350086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-13 05:42:26.350098 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-13 05:42:26.350108 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-13 05:42:26.350119 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:26.350130 | orchestrator | 2026-02-13 05:42:26.350141 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] 
********************* 2026-02-13 05:42:26.350152 | orchestrator | Friday 13 February 2026 05:42:25 +0000 (0:00:00.394) 0:02:55.197 ******* 2026-02-13 05:42:26.350171 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-13 05:42:26.350191 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-13 05:42:26.350209 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-13 05:42:26.350226 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:26.350242 | orchestrator | 2026-02-13 05:42:26.350259 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-13 05:42:26.350276 | orchestrator | Friday 13 February 2026 05:42:26 +0000 (0:00:00.841) 0:02:56.039 ******* 2026-02-13 05:42:26.350306 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:31.037640 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:31.037815 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:31.037845 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.037864 | orchestrator | 2026-02-13 05:42:31.037883 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-13 05:42:31.037901 | orchestrator | Friday 13 February 2026 05:42:26 +0000 (0:00:00.168) 0:02:56.208 ******* 2026-02-13 05:42:31.037920 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9a39aafafb69', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-13 05:42:23.618727', 'end': '2026-02-13 05:42:23.667942', 'delta': '0:00:00.049215', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9a39aafafb69'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-13 05:42:31.037941 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b8f8955ec790', 'stderr': '', 'rc': 0, 'cmd': 
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-13 05:42:24.165545', 'end': '2026-02-13 05:42:24.214200', 'delta': '0:00:00.048655', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8f8955ec790'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-13 05:42:31.037981 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '30f78d02966b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-13 05:42:24.729803', 'end': '2026-02-13 05:42:24.788568', 'delta': '0:00:00.058765', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['30f78d02966b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-13 05:42:31.037998 | orchestrator | 2026-02-13 05:42:31.038015 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-13 05:42:31.038110 | orchestrator | Friday 13 February 2026 05:42:26 +0000 (0:00:00.178) 0:02:56.386 ******* 2026-02-13 05:42:31.038130 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:31.038149 | orchestrator | 2026-02-13 05:42:31.038167 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-13 05:42:31.038185 | orchestrator | 
Friday 13 February 2026 05:42:26 +0000 (0:00:00.256) 0:02:56.643 ******* 2026-02-13 05:42:31.038204 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.038222 | orchestrator | 2026-02-13 05:42:31.038256 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-13 05:42:31.038275 | orchestrator | Friday 13 February 2026 05:42:27 +0000 (0:00:00.832) 0:02:57.476 ******* 2026-02-13 05:42:31.038293 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:31.038309 | orchestrator | 2026-02-13 05:42:31.038323 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-13 05:42:31.038333 | orchestrator | Friday 13 February 2026 05:42:27 +0000 (0:00:00.147) 0:02:57.624 ******* 2026-02-13 05:42:31.038389 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-13 05:42:31.038400 | orchestrator | 2026-02-13 05:42:31.038410 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-13 05:42:31.038419 | orchestrator | Friday 13 February 2026 05:42:29 +0000 (0:00:01.359) 0:02:58.983 ******* 2026-02-13 05:42:31.038429 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:42:31.038439 | orchestrator | 2026-02-13 05:42:31.038448 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-13 05:42:31.038458 | orchestrator | Friday 13 February 2026 05:42:29 +0000 (0:00:00.179) 0:02:59.163 ******* 2026-02-13 05:42:31.038468 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.038477 | orchestrator | 2026-02-13 05:42:31.038487 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-13 05:42:31.038496 | orchestrator | Friday 13 February 2026 05:42:29 +0000 (0:00:00.118) 0:02:59.281 ******* 2026-02-13 05:42:31.038533 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.038545 | orchestrator | 2026-02-13 
05:42:31.038555 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-13 05:42:31.038564 | orchestrator | Friday 13 February 2026 05:42:29 +0000 (0:00:00.232) 0:02:59.514 ******* 2026-02-13 05:42:31.038574 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.038584 | orchestrator | 2026-02-13 05:42:31.038594 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-13 05:42:31.038604 | orchestrator | Friday 13 February 2026 05:42:29 +0000 (0:00:00.130) 0:02:59.644 ******* 2026-02-13 05:42:31.038613 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.038623 | orchestrator | 2026-02-13 05:42:31.038633 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-13 05:42:31.038643 | orchestrator | Friday 13 February 2026 05:42:29 +0000 (0:00:00.131) 0:02:59.775 ******* 2026-02-13 05:42:31.038652 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.038662 | orchestrator | 2026-02-13 05:42:31.038672 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-13 05:42:31.038681 | orchestrator | Friday 13 February 2026 05:42:30 +0000 (0:00:00.135) 0:02:59.910 ******* 2026-02-13 05:42:31.038691 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.038701 | orchestrator | 2026-02-13 05:42:31.038710 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-13 05:42:31.038720 | orchestrator | Friday 13 February 2026 05:42:30 +0000 (0:00:00.132) 0:03:00.043 ******* 2026-02-13 05:42:31.038730 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.038739 | orchestrator | 2026-02-13 05:42:31.038749 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-13 05:42:31.038758 | orchestrator | Friday 13 February 2026 05:42:30 +0000 (0:00:00.138) 
0:03:00.182 ******* 2026-02-13 05:42:31.038768 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.038778 | orchestrator | 2026-02-13 05:42:31.038788 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-13 05:42:31.038798 | orchestrator | Friday 13 February 2026 05:42:30 +0000 (0:00:00.124) 0:03:00.306 ******* 2026-02-13 05:42:31.038808 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.038818 | orchestrator | 2026-02-13 05:42:31.038827 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-13 05:42:31.038837 | orchestrator | Friday 13 February 2026 05:42:30 +0000 (0:00:00.125) 0:03:00.431 ******* 2026-02-13 05:42:31.038848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:42:31.038885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:42:31.038896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:42:31.038908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 05:42:31.038928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:42:31.274066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:42:31.274143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:42:31.274168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8816e0be', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 05:42:31.274192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:42:31.274199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 05:42:31.274205 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:42:31.274212 | orchestrator | 2026-02-13 05:42:31.274219 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-13 05:42:31.274226 | orchestrator | Friday 13 February 2026 05:42:31 +0000 (0:00:00.469) 0:03:00.901 ******* 2026-02-13 05:42:31.274247 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:31.274255 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:31.274261 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:31.274275 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:31.274282 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:31.274287 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:31.274298 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:39.875493 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8816e0be', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:39.875828 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:39.875896 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 05:42:39.875914 | 
orchestrator | skipping: [testbed-node-0]
2026-02-13 05:42:39.875930 | orchestrator |
2026-02-13 05:42:39.875943 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-13 05:42:39.875959 | orchestrator | Friday 13 February 2026 05:42:31 +0000 (0:00:00.228) 0:03:01.129 *******
2026-02-13 05:42:39.875972 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:42:39.875987 | orchestrator |
2026-02-13 05:42:39.876001 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-13 05:42:39.876014 | orchestrator | Friday 13 February 2026 05:42:31 +0000 (0:00:00.514) 0:03:01.644 *******
2026-02-13 05:42:39.876027 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:42:39.876040 | orchestrator |
2026-02-13 05:42:39.876055 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-13 05:42:39.876091 | orchestrator | Friday 13 February 2026 05:42:31 +0000 (0:00:00.137) 0:03:01.781 *******
2026-02-13 05:42:39.876105 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:42:39.876119 | orchestrator |
2026-02-13 05:42:39.876129 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-13 05:42:39.876137 | orchestrator | Friday 13 February 2026 05:42:32 +0000 (0:00:00.465) 0:03:02.247 *******
2026-02-13 05:42:39.876145 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:42:39.876154 | orchestrator |
2026-02-13 05:42:39.876162 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-13 05:42:39.876182 | orchestrator | Friday 13 February 2026 05:42:32 +0000 (0:00:00.139) 0:03:02.386 *******
2026-02-13 05:42:39.876190 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:42:39.876198 | orchestrator |
2026-02-13 05:42:39.876206 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-13 05:42:39.876214 | orchestrator | Friday 13 February 2026 05:42:32 +0000 (0:00:00.249) 0:03:02.636 *******
2026-02-13 05:42:39.876222 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:42:39.876230 | orchestrator |
2026-02-13 05:42:39.876238 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-13 05:42:39.876246 | orchestrator | Friday 13 February 2026 05:42:32 +0000 (0:00:00.152) 0:03:02.788 *******
2026-02-13 05:42:39.876254 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 05:42:39.876263 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 05:42:39.876270 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 05:42:39.876278 | orchestrator |
2026-02-13 05:42:39.876286 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-13 05:42:39.876294 | orchestrator | Friday 13 February 2026 05:42:33 +0000 (0:00:00.906) 0:03:03.694 *******
2026-02-13 05:42:39.876302 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 05:42:39.876310 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 05:42:39.876332 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 05:42:39.876340 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:42:39.876348 | orchestrator |
2026-02-13 05:42:39.876356 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-13 05:42:39.876364 | orchestrator | Friday 13 February 2026 05:42:33 +0000 (0:00:00.157) 0:03:03.852 *******
2026-02-13 05:42:39.876372 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:42:39.876380 | orchestrator |
2026-02-13 05:42:39.876388 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-13 05:42:39.876396 | orchestrator | Friday 13 February 2026 05:42:34 +0000 (0:00:00.137) 0:03:03.990 *******
2026-02-13 05:42:39.876404 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 05:42:39.876412 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 05:42:39.876421 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 05:42:39.876429 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-13 05:42:39.876437 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-13 05:42:39.876445 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-13 05:42:39.876453 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-13 05:42:39.876461 | orchestrator |
2026-02-13 05:42:39.876469 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-13 05:42:39.876477 | orchestrator | Friday 13 February 2026 05:42:35 +0000 (0:00:01.035) 0:03:05.026 *******
2026-02-13 05:42:39.876485 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 05:42:39.876493 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 05:42:39.876501 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 05:42:39.876546 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-13 05:42:39.876555 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-13 05:42:39.876563 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-13 05:42:39.876570 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-13 05:42:39.876584 | orchestrator |
2026-02-13 05:42:39.876592 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-13 05:42:39.876600 | orchestrator | Friday 13 February 2026 05:42:36 +0000 (0:00:01.796) 0:03:06.822 *******
2026-02-13 05:42:39.876608 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-02-13 05:42:39.876616 | orchestrator |
2026-02-13 05:42:39.876623 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-13 05:42:39.876631 | orchestrator | Friday 13 February 2026 05:42:38 +0000 (0:00:01.209) 0:03:08.032 *******
2026-02-13 05:42:39.876639 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:42:39.876647 | orchestrator |
2026-02-13 05:42:39.876655 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-13 05:42:39.876663 | orchestrator | Friday 13 February 2026 05:42:38 +0000 (0:00:00.245) 0:03:08.278 *******
2026-02-13 05:42:39.876671 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:42:39.876679 | orchestrator |
2026-02-13 05:42:39.876686 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-13 05:42:39.876694 | orchestrator | Friday 13 February 2026 05:42:38 +0000 (0:00:00.137) 0:03:08.416 *******
2026-02-13 05:42:39.876702 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-02-13 05:42:39.876710 | orchestrator |
2026-02-13 05:42:39.876718 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-13 05:42:39.876732 | orchestrator | Friday 13 February 2026 05:42:39 +0000 (0:00:01.321) 0:03:09.737 *******
2026-02-13 05:43:05.728779 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.728892 | orchestrator |
2026-02-13 05:43:05.728908 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-13 05:43:05.728920 | orchestrator | Friday 13 February 2026 05:42:40 +0000 (0:00:00.139) 0:03:09.877 *******
2026-02-13 05:43:05.728931 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 05:43:05.728941 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 05:43:05.728952 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 05:43:05.728961 | orchestrator |
2026-02-13 05:43:05.728972 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-13 05:43:05.728981 | orchestrator | Friday 13 February 2026 05:42:41 +0000 (0:00:01.448) 0:03:11.326 *******
2026-02-13 05:43:05.728992 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-13 05:43:05.729065 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-13 05:43:05.729093 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-13 05:43:05.729110 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-13 05:43:05.729126 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-13 05:43:05.729142 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-13 05:43:05.729157 | orchestrator |
2026-02-13 05:43:05.729173 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-13 05:43:05.729189 | orchestrator | Friday 13 February 2026 05:42:53 +0000 (0:00:12.394) 0:03:23.721 *******
2026-02-13 05:43:05.729205 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 05:43:05.729223 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 05:43:05.729240 | orchestrator |
2026-02-13 05:43:05.729255 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-13 05:43:05.729272 | orchestrator | Friday 13 February 2026 05:42:56 +0000 (0:00:02.806) 0:03:26.528 *******
2026-02-13 05:43:05.729288 | orchestrator | changed: [testbed-node-0]
2026-02-13 05:43:05.729303 | orchestrator |
2026-02-13 05:43:05.729321 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-13 05:43:05.729364 | orchestrator | Friday 13 February 2026 05:42:58 +0000 (0:00:01.472) 0:03:28.000 *******
2026-02-13 05:43:05.729377 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-13 05:43:05.729389 | orchestrator |
2026-02-13 05:43:05.729401 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-13 05:43:05.729413 | orchestrator | Friday 13 February 2026 05:42:58 +0000 (0:00:00.579) 0:03:28.580 *******
2026-02-13 05:43:05.729432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-13 05:43:05.729443 | orchestrator |
2026-02-13 05:43:05.729455 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-13 05:43:05.729466 | orchestrator | Friday 13 February 2026 05:42:59 +0000 (0:00:00.816) 0:03:29.397 *******
2026-02-13 05:43:05.729477 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:05.729489 | orchestrator |
2026-02-13 05:43:05.729533 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-13 05:43:05.729545 | orchestrator | Friday 13 February 2026 05:43:00 +0000 (0:00:00.524) 0:03:29.922 *******
2026-02-13 05:43:05.729557 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.729568 | orchestrator |
2026-02-13 05:43:05.729579 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-13 05:43:05.729591 | orchestrator | Friday 13 February 2026 05:43:00 +0000 (0:00:00.176) 0:03:30.098 *******
2026-02-13 05:43:05.729602 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.729614 | orchestrator |
2026-02-13 05:43:05.729625 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-13 05:43:05.729636 | orchestrator | Friday 13 February 2026 05:43:00 +0000 (0:00:00.131) 0:03:30.230 *******
2026-02-13 05:43:05.729648 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.729659 | orchestrator |
2026-02-13 05:43:05.729670 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-13 05:43:05.729682 | orchestrator | Friday 13 February 2026 05:43:00 +0000 (0:00:00.151) 0:03:30.381 *******
2026-02-13 05:43:05.729694 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:05.729705 | orchestrator |
2026-02-13 05:43:05.729716 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-13 05:43:05.729726 | orchestrator | Friday 13 February 2026 05:43:01 +0000 (0:00:00.593) 0:03:30.975 *******
2026-02-13 05:43:05.729735 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.729745 | orchestrator |
2026-02-13 05:43:05.729754 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-13 05:43:05.729764 | orchestrator | Friday 13 February 2026 05:43:01 +0000 (0:00:00.133) 0:03:31.109 *******
2026-02-13 05:43:05.729774 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.729783 | orchestrator |
2026-02-13 05:43:05.729793 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-13 05:43:05.729803 | orchestrator | Friday 13 February 2026 05:43:01 +0000 (0:00:00.128) 0:03:31.238 *******
2026-02-13 05:43:05.729812 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:05.729822 | orchestrator |
2026-02-13 05:43:05.729831 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-13 05:43:05.729841 | orchestrator | Friday 13 February 2026 05:43:01 +0000 (0:00:00.583) 0:03:31.822 *******
2026-02-13 05:43:05.729850 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:05.729860 | orchestrator |
2026-02-13 05:43:05.729889 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-13 05:43:05.729900 | orchestrator | Friday 13 February 2026 05:43:02 +0000 (0:00:00.551) 0:03:32.373 *******
2026-02-13 05:43:05.729910 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.729919 | orchestrator |
2026-02-13 05:43:05.729929 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-13 05:43:05.729939 | orchestrator | Friday 13 February 2026 05:43:02 +0000 (0:00:00.127) 0:03:32.501 *******
2026-02-13 05:43:05.729948 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:05.729966 | orchestrator |
2026-02-13 05:43:05.729976 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-13 05:43:05.729986 | orchestrator | Friday 13 February 2026 05:43:02 +0000 (0:00:00.137) 0:03:32.638 *******
2026-02-13 05:43:05.730002 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730125 | orchestrator |
2026-02-13 05:43:05.730139 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-13 05:43:05.730149 | orchestrator | Friday 13 February 2026 05:43:02 +0000 (0:00:00.165) 0:03:32.804 *******
2026-02-13 05:43:05.730159 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730169 | orchestrator |
2026-02-13 05:43:05.730178 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-13 05:43:05.730194 | orchestrator | Friday 13 February 2026 05:43:03 +0000 (0:00:00.122) 0:03:32.927 *******
2026-02-13 05:43:05.730211 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730227 | orchestrator |
2026-02-13 05:43:05.730243 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-13 05:43:05.730259 | orchestrator | Friday 13 February 2026 05:43:03 +0000 (0:00:00.354) 0:03:33.281 *******
2026-02-13 05:43:05.730276 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730293 | orchestrator |
2026-02-13 05:43:05.730311 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-13 05:43:05.730327 | orchestrator | Friday 13 February 2026 05:43:03 +0000 (0:00:00.132) 0:03:33.414 *******
2026-02-13 05:43:05.730344 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730362 | orchestrator |
2026-02-13 05:43:05.730379 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-13 05:43:05.730395 | orchestrator | Friday 13 February 2026 05:43:03 +0000 (0:00:00.130) 0:03:33.544 *******
2026-02-13 05:43:05.730409 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:05.730419 | orchestrator |
2026-02-13 05:43:05.730429 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-13 05:43:05.730438 | orchestrator | Friday 13 February 2026 05:43:03 +0000 (0:00:00.152) 0:03:33.697 *******
2026-02-13 05:43:05.730448 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:05.730458 | orchestrator |
2026-02-13 05:43:05.730467 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-13 05:43:05.730477 | orchestrator | Friday 13 February 2026 05:43:03 +0000 (0:00:00.165) 0:03:33.862 *******
2026-02-13 05:43:05.730487 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:05.730520 | orchestrator |
2026-02-13 05:43:05.730532 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-13 05:43:05.730541 | orchestrator | Friday 13 February 2026 05:43:04 +0000 (0:00:00.228) 0:03:34.091 *******
2026-02-13 05:43:05.730551 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730560 | orchestrator |
2026-02-13 05:43:05.730577 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-13 05:43:05.730587 | orchestrator | Friday 13 February 2026 05:43:04 +0000 (0:00:00.134) 0:03:34.226 *******
2026-02-13 05:43:05.730597 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730607 | orchestrator |
2026-02-13 05:43:05.730616 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-13 05:43:05.730626 | orchestrator | Friday 13 February 2026 05:43:04 +0000 (0:00:00.128) 0:03:34.355 *******
2026-02-13 05:43:05.730636 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730645 | orchestrator |
2026-02-13 05:43:05.730655 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-13 05:43:05.730664 | orchestrator | Friday 13 February 2026 05:43:04 +0000 (0:00:00.115) 0:03:34.471 *******
2026-02-13 05:43:05.730674 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730684 | orchestrator |
2026-02-13 05:43:05.730693 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-13 05:43:05.730703 | orchestrator | Friday 13 February 2026 05:43:04 +0000 (0:00:00.116) 0:03:34.587 *******
2026-02-13 05:43:05.730713 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730735 | orchestrator |
2026-02-13 05:43:05.730745 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-13 05:43:05.730755 | orchestrator | Friday 13 February 2026 05:43:04 +0000 (0:00:00.123) 0:03:34.711 *******
2026-02-13 05:43:05.730765 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730775 | orchestrator |
2026-02-13 05:43:05.730784 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-13 05:43:05.730794 | orchestrator | Friday 13 February 2026 05:43:04 +0000 (0:00:00.131) 0:03:34.842 *******
2026-02-13 05:43:05.730804 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730813 | orchestrator |
2026-02-13 05:43:05.730823 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-13 05:43:05.730834 | orchestrator | Friday 13 February 2026 05:43:05 +0000 (0:00:00.367) 0:03:35.210 *******
2026-02-13 05:43:05.730843 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730853 | orchestrator |
2026-02-13 05:43:05.730863 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-13 05:43:05.730873 | orchestrator | Friday 13 February 2026 05:43:05 +0000 (0:00:00.119) 0:03:35.341 *******
2026-02-13 05:43:05.730882 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730895 | orchestrator |
2026-02-13 05:43:05.730905 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-13 05:43:05.730915 | orchestrator | Friday 13 February 2026 05:43:05 +0000 (0:00:00.122) 0:03:35.460 *******
2026-02-13 05:43:05.730925 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:05.730934 | orchestrator |
2026-02-13 05:43:05.730944 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-13 05:43:05.730954 | orchestrator | Friday 13 February 2026 05:43:05 +0000 (0:00:00.122) 0:03:35.583 *******
2026-02-13 05:43:24.450399 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.450570 | orchestrator |
2026-02-13 05:43:24.450587 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-13 05:43:24.450598 | orchestrator | Friday 13 February 2026 05:43:05 +0000 (0:00:00.123) 0:03:35.707 *******
2026-02-13 05:43:24.450607 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.450617 | orchestrator |
2026-02-13 05:43:24.450626 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-13 05:43:24.450635 | orchestrator | Friday 13 February 2026 05:43:06 +0000 (0:00:00.213) 0:03:35.920 *******
2026-02-13 05:43:24.450643 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:24.450655 | orchestrator |
2026-02-13 05:43:24.450670 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-13 05:43:24.450685 | orchestrator | Friday 13 February 2026 05:43:07 +0000 (0:00:00.978) 0:03:36.899 *******
2026-02-13 05:43:24.450699 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:24.450714 | orchestrator |
2026-02-13 05:43:24.450728 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-13 05:43:24.450742 | orchestrator | Friday 13 February 2026 05:43:08 +0000 (0:00:01.439) 0:03:38.338 *******
2026-02-13 05:43:24.450756 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-13 05:43:24.450770 | orchestrator |
2026-02-13 05:43:24.450783 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-13 05:43:24.450796 | orchestrator | Friday 13 February 2026 05:43:09 +0000 (0:00:00.574) 0:03:38.913 *******
2026-02-13 05:43:24.450809 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.450823 | orchestrator |
2026-02-13 05:43:24.450837 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-13 05:43:24.450850 | orchestrator | Friday 13 February 2026 05:43:09 +0000 (0:00:00.134) 0:03:39.048 *******
2026-02-13 05:43:24.450863 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.450877 | orchestrator |
2026-02-13 05:43:24.450891 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-13 05:43:24.450905 | orchestrator | Friday 13 February 2026 05:43:09 +0000 (0:00:00.126) 0:03:39.175 *******
2026-02-13 05:43:24.450949 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-13 05:43:24.450967 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-13 05:43:24.450984 | orchestrator |
2026-02-13 05:43:24.450999 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-13 05:43:24.451015 | orchestrator | Friday 13 February 2026 05:43:10 +0000 (0:00:01.098) 0:03:40.273 *******
2026-02-13 05:43:24.451031 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:24.451046 | orchestrator |
2026-02-13 05:43:24.451061 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-13 05:43:24.451077 | orchestrator | Friday 13 February 2026 05:43:11 +0000 (0:00:00.644) 0:03:40.917 *******
2026-02-13 05:43:24.451093 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451108 | orchestrator |
2026-02-13 05:43:24.451125 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-13 05:43:24.451157 | orchestrator | Friday 13 February 2026 05:43:11 +0000 (0:00:00.146) 0:03:41.064 *******
2026-02-13 05:43:24.451168 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451179 | orchestrator |
2026-02-13 05:43:24.451189 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-13 05:43:24.451198 | orchestrator | Friday 13 February 2026 05:43:11 +0000 (0:00:00.138) 0:03:41.202 *******
2026-02-13 05:43:24.451208 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451218 | orchestrator |
2026-02-13 05:43:24.451229 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-13 05:43:24.451239 | orchestrator | Friday 13 February 2026 05:43:11 +0000 (0:00:00.132) 0:03:41.334 *******
2026-02-13 05:43:24.451249 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-13 05:43:24.451259 | orchestrator |
2026-02-13 05:43:24.451269 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-13 05:43:24.451279 | orchestrator | Friday 13 February 2026 05:43:12 +0000 (0:00:00.564) 0:03:41.899 *******
2026-02-13 05:43:24.451288 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:24.451297 | orchestrator |
2026-02-13 05:43:24.451305 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-13 05:43:24.451314 | orchestrator | Friday 13 February 2026 05:43:12 +0000 (0:00:00.697) 0:03:42.597 *******
2026-02-13 05:43:24.451323 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-13 05:43:24.451332 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-13 05:43:24.451340 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-13 05:43:24.451349 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451358 | orchestrator |
2026-02-13 05:43:24.451367 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-13 05:43:24.451375 | orchestrator | Friday 13 February 2026 05:43:12 +0000 (0:00:00.141) 0:03:42.739 *******
2026-02-13 05:43:24.451384 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451393 | orchestrator |
2026-02-13 05:43:24.451401 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-13 05:43:24.451410 | orchestrator | Friday 13 February 2026 05:43:12 +0000 (0:00:00.116) 0:03:42.855 *******
2026-02-13 05:43:24.451419 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451428 | orchestrator |
2026-02-13 05:43:24.451437 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-13 05:43:24.451445 | orchestrator | Friday 13 February 2026 05:43:13 +0000 (0:00:00.173) 0:03:43.029 *******
2026-02-13 05:43:24.451454 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451463 | orchestrator |
2026-02-13 05:43:24.451471 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-13 05:43:24.451546 | orchestrator | Friday 13 February 2026 05:43:13 +0000 (0:00:00.143) 0:03:43.173 *******
2026-02-13 05:43:24.451577 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451587 | orchestrator |
2026-02-13 05:43:24.451596 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-13 05:43:24.451605 | orchestrator | Friday 13 February 2026 05:43:13 +0000 (0:00:00.156) 0:03:43.329 *******
2026-02-13 05:43:24.451614 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451622 | orchestrator |
2026-02-13 05:43:24.451631 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-13 05:43:24.451640 | orchestrator | Friday 13 February 2026 05:43:13 +0000 (0:00:00.373) 0:03:43.703 *******
2026-02-13 05:43:24.451649 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:24.451658 | orchestrator |
2026-02-13 05:43:24.451666 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-13 05:43:24.451675 | orchestrator | Friday 13 February 2026 05:43:15 +0000 (0:00:01.700) 0:03:45.403 *******
2026-02-13 05:43:24.451684 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:24.451693 | orchestrator |
2026-02-13 05:43:24.451702 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-13 05:43:24.451711 | orchestrator | Friday 13 February 2026 05:43:15 +0000 (0:00:00.143) 0:03:45.547 *******
2026-02-13 05:43:24.451719 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-13 05:43:24.451728 | orchestrator |
2026-02-13 05:43:24.451737 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-13 05:43:24.451746 | orchestrator | Friday 13 February 2026 05:43:16 +0000 (0:00:00.574) 0:03:46.121 *******
2026-02-13 05:43:24.451755 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451763 | orchestrator |
2026-02-13 05:43:24.451772 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-13 05:43:24.451781 | orchestrator | Friday 13 February 2026 05:43:16 +0000 (0:00:00.149) 0:03:46.270 *******
2026-02-13 05:43:24.451790 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451799 | orchestrator |
2026-02-13 05:43:24.451808 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-13 05:43:24.451816 | orchestrator | Friday 13 February 2026 05:43:16 +0000 (0:00:00.153) 0:03:46.424 *******
2026-02-13 05:43:24.451825 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451834 | orchestrator |
2026-02-13 05:43:24.451843 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-13 05:43:24.451852 | orchestrator | Friday 13 February 2026 05:43:16 +0000 (0:00:00.150) 0:03:46.575 *******
2026-02-13 05:43:24.451860 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451869 | orchestrator |
2026-02-13 05:43:24.451878 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-13 05:43:24.451886 | orchestrator | Friday 13 February 2026 05:43:16 +0000 (0:00:00.143) 0:03:46.718 *******
2026-02-13 05:43:24.451895 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451904 | orchestrator |
2026-02-13 05:43:24.451913 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-13 05:43:24.451921 | orchestrator | Friday 13 February 2026 05:43:16 +0000 (0:00:00.151) 0:03:46.870 *******
2026-02-13 05:43:24.451930 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451939 | orchestrator |
2026-02-13 05:43:24.451953 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-13 05:43:24.451962 | orchestrator | Friday 13 February 2026 05:43:17 +0000 (0:00:00.150) 0:03:47.020 *******
2026-02-13 05:43:24.451971 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.451980 | orchestrator |
2026-02-13 05:43:24.451988 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-13 05:43:24.451997 | orchestrator | Friday 13 February 2026 05:43:17 +0000 (0:00:00.149) 0:03:47.170 *******
2026-02-13 05:43:24.452006 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:24.452015 | orchestrator |
2026-02-13 05:43:24.452023 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-13 05:43:24.452032 | orchestrator | Friday 13 February 2026 05:43:17 +0000 (0:00:00.134) 0:03:47.305 *******
2026-02-13 05:43:24.452046 | orchestrator | ok: [testbed-node-0]
2026-02-13 05:43:24.452055 | orchestrator |
2026-02-13 05:43:24.452064 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-13 05:43:24.452073 | orchestrator | Friday 13 February 2026 05:43:17 +0000 (0:00:00.485) 0:03:47.790 *******
2026-02-13 05:43:24.452082 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-13 05:43:24.452091 | orchestrator |
2026-02-13 05:43:24.452099 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-13 05:43:24.452108 | orchestrator | Friday 13 February 2026 05:43:18 +0000 (0:00:00.576) 0:03:48.366 *******
2026-02-13 05:43:24.452117 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-13 05:43:24.452126 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-13 05:43:24.452135 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-13 05:43:24.452144 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-13 05:43:24.452152 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-13 05:43:24.452161 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-13 05:43:24.452170 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-13 05:43:24.452179 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-13 05:43:24.452188 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-13 05:43:24.452196 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-13 05:43:24.452205 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-13 05:43:24.452214 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-13 05:43:24.452223 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-13 05:43:24.452231 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-13 05:43:24.452246 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-13 05:43:37.318379 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-13 05:43:37.318575 | orchestrator |
2026-02-13 05:43:37.318597 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-13 05:43:37.318609 | orchestrator | Friday 13 February 2026 05:43:24 +0000 (0:00:05.933) 0:03:54.300 *******
2026-02-13 05:43:37.318619 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.318631 | orchestrator |
2026-02-13 05:43:37.318641 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-13 05:43:37.318651 | orchestrator | Friday 13 February 2026 05:43:24 +0000 (0:00:00.134) 0:03:54.434 *******
2026-02-13 05:43:37.318661 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.318670 | orchestrator |
2026-02-13 05:43:37.318680 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-13 05:43:37.318689 | orchestrator | Friday 13 February 2026 05:43:24 +0000 (0:00:00.132) 0:03:54.566 *******
2026-02-13 05:43:37.318699 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.318709 | orchestrator |
2026-02-13 05:43:37.318718 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-13 05:43:37.318728 | orchestrator | Friday 13 February 2026 05:43:24 +0000 (0:00:00.126) 0:03:54.693 *******
2026-02-13 05:43:37.318738 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.318747 | orchestrator |
2026-02-13 05:43:37.318757 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-13 05:43:37.318766 | orchestrator | Friday 13 February 2026 05:43:24 +0000 (0:00:00.129) 0:03:54.822 *******
2026-02-13 05:43:37.318776 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.318785 | orchestrator |
2026-02-13 05:43:37.318795 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-13 05:43:37.318804 | orchestrator | Friday 13 February 2026 05:43:25 +0000 (0:00:00.144) 0:03:54.967 *******
2026-02-13 05:43:37.318843 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.318861 | orchestrator |
2026-02-13 05:43:37.318879 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-13 05:43:37.318897 | orchestrator | Friday 13 February 2026 05:43:25 +0000 (0:00:00.146) 0:03:55.113 *******
2026-02-13 05:43:37.318914 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.318931 | orchestrator |
2026-02-13 05:43:37.318947 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-13 05:43:37.318965 | orchestrator | Friday 13 February 2026 05:43:25 +0000 (0:00:00.115) 0:03:55.229 *******
2026-02-13 05:43:37.318984 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.319003 | orchestrator |
2026-02-13 05:43:37.319020 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-13 05:43:37.319038 | orchestrator | Friday 13 February 2026 05:43:25 +0000 (0:00:00.138) 0:03:55.368 *******
2026-02-13 05:43:37.319056 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.319073 | orchestrator |
2026-02-13 05:43:37.319090 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-13 05:43:37.319106 | orchestrator | Friday 13 February 2026 05:43:25 +0000 (0:00:00.128) 0:03:55.497 *******
2026-02-13 05:43:37.319137 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.319147 | orchestrator |
2026-02-13 05:43:37.319157 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-13 05:43:37.319166 | orchestrator | Friday 13 February 2026 05:43:25 +0000 (0:00:00.357) 0:03:55.854 *******
2026-02-13 05:43:37.319176 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.319186 | orchestrator |
2026-02-13 05:43:37.319195 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-13 05:43:37.319205 | orchestrator | Friday 13 February 2026 05:43:26 +0000 (0:00:00.139) 0:03:55.994 *******
2026-02-13 05:43:37.319215 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.319224 | orchestrator |
2026-02-13 05:43:37.319234 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-13 05:43:37.319244 | orchestrator | Friday 13 February 2026 05:43:26 +0000 (0:00:00.152) 0:03:56.147 *******
2026-02-13 05:43:37.319253 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.319263 | orchestrator |
2026-02-13 05:43:37.319273 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-13 05:43:37.319282 | orchestrator | Friday 13 February 2026 05:43:26 +0000 (0:00:00.220) 0:03:56.368 *******
2026-02-13 05:43:37.319292 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.319302 | orchestrator |
2026-02-13 05:43:37.319312 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-13 05:43:37.319321 | orchestrator | Friday 13 February 2026 05:43:26 +0000 (0:00:00.132) 0:03:56.500 *******
2026-02-13 05:43:37.319331 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.319341 | orchestrator |
2026-02-13 05:43:37.319351 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-13 05:43:37.319360 | orchestrator | Friday 13 February 2026 05:43:26 +0000 (0:00:00.238) 0:03:56.739 *******
2026-02-13 05:43:37.319371 | orchestrator | skipping: [testbed-node-0]
2026-02-13 05:43:37.319381 | orchestrator |
2026-02-13 05:43:37.319390 | orchestrator | TASK [ceph-config : Create ceph conf directory]
******************************** 2026-02-13 05:43:37.319400 | orchestrator | Friday 13 February 2026 05:43:27 +0000 (0:00:00.144) 0:03:56.883 ******* 2026-02-13 05:43:37.319410 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:43:37.319419 | orchestrator | 2026-02-13 05:43:37.319429 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-13 05:43:37.319440 | orchestrator | Friday 13 February 2026 05:43:27 +0000 (0:00:00.123) 0:03:57.007 ******* 2026-02-13 05:43:37.319450 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:43:37.319460 | orchestrator | 2026-02-13 05:43:37.319469 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-13 05:43:37.319522 | orchestrator | Friday 13 February 2026 05:43:27 +0000 (0:00:00.130) 0:03:57.137 ******* 2026-02-13 05:43:37.319534 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:43:37.319544 | orchestrator | 2026-02-13 05:43:37.319576 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-13 05:43:37.319595 | orchestrator | Friday 13 February 2026 05:43:27 +0000 (0:00:00.132) 0:03:57.270 ******* 2026-02-13 05:43:37.319613 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:43:37.319629 | orchestrator | 2026-02-13 05:43:37.319646 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-13 05:43:37.319664 | orchestrator | Friday 13 February 2026 05:43:27 +0000 (0:00:00.139) 0:03:57.410 ******* 2026-02-13 05:43:37.319681 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:43:37.319695 | orchestrator | 2026-02-13 05:43:37.319708 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-13 05:43:37.319725 | orchestrator | Friday 13 February 2026 05:43:27 +0000 (0:00:00.134) 0:03:57.544 ******* 
2026-02-13 05:43:37.319742 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-13 05:43:37.319759 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-13 05:43:37.319775 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-13 05:43:37.319786 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:43:37.319795 | orchestrator | 2026-02-13 05:43:37.319805 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-13 05:43:37.319814 | orchestrator | Friday 13 February 2026 05:43:28 +0000 (0:00:00.664) 0:03:58.209 ******* 2026-02-13 05:43:37.319824 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-13 05:43:37.319833 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-13 05:43:37.319843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-13 05:43:37.319852 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:43:37.319862 | orchestrator | 2026-02-13 05:43:37.319872 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-13 05:43:37.319881 | orchestrator | Friday 13 February 2026 05:43:29 +0000 (0:00:00.921) 0:03:59.130 ******* 2026-02-13 05:43:37.319891 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-13 05:43:37.319901 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-13 05:43:37.319910 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-13 05:43:37.319920 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:43:37.319929 | orchestrator | 2026-02-13 05:43:37.319939 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-13 05:43:37.319948 | orchestrator | Friday 13 February 2026 05:43:29 +0000 (0:00:00.408) 0:03:59.539 ******* 2026-02-13 05:43:37.319958 | orchestrator | skipping: 
[testbed-node-0] 2026-02-13 05:43:37.319967 | orchestrator | 2026-02-13 05:43:37.319977 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-13 05:43:37.319987 | orchestrator | Friday 13 February 2026 05:43:29 +0000 (0:00:00.150) 0:03:59.689 ******* 2026-02-13 05:43:37.319997 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-13 05:43:37.320006 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:43:37.320016 | orchestrator | 2026-02-13 05:43:37.320026 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-13 05:43:37.320035 | orchestrator | Friday 13 February 2026 05:43:30 +0000 (0:00:00.613) 0:04:00.303 ******* 2026-02-13 05:43:37.320052 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:43:37.320062 | orchestrator | 2026-02-13 05:43:37.320072 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-13 05:43:37.320087 | orchestrator | Friday 13 February 2026 05:43:31 +0000 (0:00:00.900) 0:04:01.203 ******* 2026-02-13 05:43:37.320104 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:43:37.320121 | orchestrator | 2026-02-13 05:43:37.320139 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-13 05:43:37.320168 | orchestrator | Friday 13 February 2026 05:43:31 +0000 (0:00:00.157) 0:04:01.360 ******* 2026-02-13 05:43:37.320185 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-02-13 05:43:37.320203 | orchestrator | 2026-02-13 05:43:37.320221 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-13 05:43:37.320238 | orchestrator | Friday 13 February 2026 05:43:32 +0000 (0:00:00.608) 0:04:01.969 ******* 2026-02-13 05:43:37.320254 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-13 05:43:37.320270 | orchestrator 
| 2026-02-13 05:43:37.320280 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-13 05:43:37.320290 | orchestrator | Friday 13 February 2026 05:43:34 +0000 (0:00:02.243) 0:04:04.212 ******* 2026-02-13 05:43:37.320300 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:43:37.320309 | orchestrator | 2026-02-13 05:43:37.320319 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-13 05:43:37.320329 | orchestrator | Friday 13 February 2026 05:43:34 +0000 (0:00:00.157) 0:04:04.370 ******* 2026-02-13 05:43:37.320338 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:43:37.320348 | orchestrator | 2026-02-13 05:43:37.320358 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-13 05:43:37.320368 | orchestrator | Friday 13 February 2026 05:43:34 +0000 (0:00:00.172) 0:04:04.542 ******* 2026-02-13 05:43:37.320377 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:43:37.320387 | orchestrator | 2026-02-13 05:43:37.320397 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-13 05:43:37.320406 | orchestrator | Friday 13 February 2026 05:43:35 +0000 (0:00:00.401) 0:04:04.944 ******* 2026-02-13 05:43:37.320416 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:43:37.320426 | orchestrator | 2026-02-13 05:43:37.320436 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-13 05:43:37.320446 | orchestrator | Friday 13 February 2026 05:43:36 +0000 (0:00:01.126) 0:04:06.070 ******* 2026-02-13 05:43:37.320455 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:43:37.320465 | orchestrator | 2026-02-13 05:43:37.320475 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-13 05:43:37.320512 | orchestrator | Friday 13 February 2026 05:43:36 +0000 (0:00:00.603) 
0:04:06.673 ******* 2026-02-13 05:43:37.320522 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:43:37.320531 | orchestrator | 2026-02-13 05:43:37.320552 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-13 05:44:29.994542 | orchestrator | Friday 13 February 2026 05:43:37 +0000 (0:00:00.502) 0:04:07.175 ******* 2026-02-13 05:44:29.994690 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.994718 | orchestrator | 2026-02-13 05:44:29.994738 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-13 05:44:29.994757 | orchestrator | Friday 13 February 2026 05:43:37 +0000 (0:00:00.498) 0:04:07.674 ******* 2026-02-13 05:44:29.994775 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.994793 | orchestrator | 2026-02-13 05:44:29.994810 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-13 05:44:29.994829 | orchestrator | Friday 13 February 2026 05:43:38 +0000 (0:00:00.766) 0:04:08.441 ******* 2026-02-13 05:44:29.994846 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.994863 | orchestrator | 2026-02-13 05:44:29.994881 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-13 05:44:29.994900 | orchestrator | Friday 13 February 2026 05:43:39 +0000 (0:00:00.770) 0:04:09.211 ******* 2026-02-13 05:44:29.994917 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-13 05:44:29.994937 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-13 05:44:29.994955 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-13 05:44:29.994973 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-13 05:44:29.994992 | orchestrator | 2026-02-13 05:44:29.995010 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-13 
05:44:29.995059 | orchestrator | Friday 13 February 2026 05:43:42 +0000 (0:00:02.772) 0:04:11.984 ******* 2026-02-13 05:44:29.995081 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:44:29.995100 | orchestrator | 2026-02-13 05:44:29.995119 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-13 05:44:29.995137 | orchestrator | Friday 13 February 2026 05:43:43 +0000 (0:00:01.063) 0:04:13.047 ******* 2026-02-13 05:44:29.995152 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.995163 | orchestrator | 2026-02-13 05:44:29.995175 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-13 05:44:29.995185 | orchestrator | Friday 13 February 2026 05:43:43 +0000 (0:00:00.135) 0:04:13.183 ******* 2026-02-13 05:44:29.995196 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.995206 | orchestrator | 2026-02-13 05:44:29.995216 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-13 05:44:29.995226 | orchestrator | Friday 13 February 2026 05:43:43 +0000 (0:00:00.145) 0:04:13.328 ******* 2026-02-13 05:44:29.995235 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.995245 | orchestrator | 2026-02-13 05:44:29.995255 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-13 05:44:29.995265 | orchestrator | Friday 13 February 2026 05:43:44 +0000 (0:00:01.003) 0:04:14.332 ******* 2026-02-13 05:44:29.995274 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.995284 | orchestrator | 2026-02-13 05:44:29.995294 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-13 05:44:29.995304 | orchestrator | Friday 13 February 2026 05:43:44 +0000 (0:00:00.506) 0:04:14.838 ******* 2026-02-13 05:44:29.995313 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:44:29.995323 | orchestrator | 2026-02-13 
05:44:29.995349 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-13 05:44:29.995360 | orchestrator | Friday 13 February 2026 05:43:45 +0000 (0:00:00.374) 0:04:15.213 ******* 2026-02-13 05:44:29.995370 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-13 05:44:29.995381 | orchestrator | 2026-02-13 05:44:29.995391 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-13 05:44:29.995400 | orchestrator | Friday 13 February 2026 05:43:45 +0000 (0:00:00.568) 0:04:15.781 ******* 2026-02-13 05:44:29.995410 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:44:29.995420 | orchestrator | 2026-02-13 05:44:29.995429 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-13 05:44:29.995439 | orchestrator | Friday 13 February 2026 05:43:46 +0000 (0:00:00.132) 0:04:15.914 ******* 2026-02-13 05:44:29.995449 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:44:29.995487 | orchestrator | 2026-02-13 05:44:29.995504 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-13 05:44:29.995521 | orchestrator | Friday 13 February 2026 05:43:46 +0000 (0:00:00.136) 0:04:16.050 ******* 2026-02-13 05:44:29.995538 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-13 05:44:29.995555 | orchestrator | 2026-02-13 05:44:29.995572 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-13 05:44:29.995589 | orchestrator | Friday 13 February 2026 05:43:46 +0000 (0:00:00.574) 0:04:16.625 ******* 2026-02-13 05:44:29.995607 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:44:29.995622 | orchestrator | 2026-02-13 05:44:29.995632 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 
2026-02-13 05:44:29.995642 | orchestrator | Friday 13 February 2026 05:43:48 +0000 (0:00:01.338) 0:04:17.963 ******* 2026-02-13 05:44:29.995651 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.995661 | orchestrator | 2026-02-13 05:44:29.995671 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-13 05:44:29.995681 | orchestrator | Friday 13 February 2026 05:43:49 +0000 (0:00:01.009) 0:04:18.973 ******* 2026-02-13 05:44:29.995690 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.995713 | orchestrator | 2026-02-13 05:44:29.995723 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-13 05:44:29.995732 | orchestrator | Friday 13 February 2026 05:43:50 +0000 (0:00:01.414) 0:04:20.388 ******* 2026-02-13 05:44:29.995742 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:44:29.995752 | orchestrator | 2026-02-13 05:44:29.995761 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-13 05:44:29.995772 | orchestrator | Friday 13 February 2026 05:43:52 +0000 (0:00:02.257) 0:04:22.645 ******* 2026-02-13 05:44:29.995782 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-13 05:44:29.995792 | orchestrator | 2026-02-13 05:44:29.995821 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-13 05:44:29.995831 | orchestrator | Friday 13 February 2026 05:43:53 +0000 (0:00:00.572) 0:04:23.217 ******* 2026-02-13 05:44:29.995841 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-13 05:44:29.995851 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.995861 | orchestrator | 2026-02-13 05:44:29.995871 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-13 05:44:29.995880 | orchestrator | Friday 13 February 2026 05:44:15 +0000 (0:00:22.157) 0:04:45.375 ******* 2026-02-13 05:44:29.995890 | orchestrator | ok: [testbed-node-0] 2026-02-13 05:44:29.995899 | orchestrator | 2026-02-13 05:44:29.995909 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-13 05:44:29.995918 | orchestrator | Friday 13 February 2026 05:44:17 +0000 (0:00:02.008) 0:04:47.384 ******* 2026-02-13 05:44:29.995928 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:44:29.995937 | orchestrator | 2026-02-13 05:44:29.995947 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-13 05:44:29.995956 | orchestrator | Friday 13 February 2026 05:44:17 +0000 (0:00:00.136) 0:04:47.521 ******* 2026-02-13 05:44:29.995968 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-13 05:44:29.995982 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-13 05:44:29.995992 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-13 05:44:29.996008 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-13 05:44:29.996020 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-13 05:44:29.996031 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__f00315770a5eaab1008c40010aeb9bd735c734b8'}])  2026-02-13 05:44:29.996050 | orchestrator | 2026-02-13 05:44:29.996060 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-13 05:44:29.996069 | orchestrator | Friday 13 February 2026 05:44:27 +0000 (0:00:09.363) 0:04:56.885 ******* 2026-02-13 05:44:29.996082 | orchestrator | changed: [testbed-node-0] 2026-02-13 05:44:29.996099 | orchestrator | 
2026-02-13 05:44:29.996118 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-13 05:44:29.996142 | orchestrator | Friday 13 February 2026 05:44:28 +0000 (0:00:01.428) 0:04:58.313 ******* 2026-02-13 05:44:29.996158 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 05:44:29.996173 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-13 05:44:29.996189 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-13 05:44:29.996205 | orchestrator | 2026-02-13 05:44:29.996218 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-13 05:44:29.996233 | orchestrator | Friday 13 February 2026 05:44:29 +0000 (0:00:01.080) 0:04:59.394 ******* 2026-02-13 05:44:29.996250 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-13 05:44:29.996265 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-13 05:44:29.996281 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-13 05:44:29.996297 | orchestrator | skipping: [testbed-node-0] 2026-02-13 05:44:29.996314 | orchestrator | 2026-02-13 05:44:29.996342 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-13 06:15:49.209842 | orchestrator | Friday 13 February 2026 05:44:29 +0000 (0:00:00.458) 0:04:59.852 ******* 2026-02-13 06:15:49.209937 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:15:49.209950 | orchestrator | 2026-02-13 06:15:49.209959 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] *** 2026-02-13 06:15:49.209967 | orchestrator | Friday 13 February 2026 05:44:30 +0000 (0:00:00.132) 0:04:59.985 ******* 2026-02-13 06:15:49.209974 | orchestrator | 2026-02-13 06:15:49.209981 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' 
is running] *** 2026-02-13 06:15:49.210097 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left). 2026-02-13 06:15:49.210353 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left). 2026-02-13 06:15:49.210546 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left). 2026-02-13 06:15:49.210766 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left). 2026-02-13 06:15:49.210934 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left).
is running] *** 2026-02-13 06:15:49.211060 | orchestrator | 2026-02-13 06:15:49.211067 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:15:49.211074 | orchestrator | 2026-02-13 06:15:49.211081 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:15:49.211088 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin 2026-02-13 06:15:49.211096 | orchestrator | (): 'cd5d68ca-aa8c-b071-2a87-000000000297' 2026-02-13 06:15:49.211119 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.8", "quorum_status", "--format", "json"], "delta": "0:05:00.293154", "end": "2026-02-13 06:15:48.961124", "msg": "non-zero return code", "rc": 1, "start": "2026-02-13 06:10:48.667970", "stderr": "2026-02-13T06:15:48.941+0000 7d2c3fa5c640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-02-13T06:15:48.941+0000 7d2c3fa5c640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []} 2026-02-13 06:15:53.571805 | orchestrator | 2026-02-13 06:15:53 | INFO  | Task 7980b5c4-7eee-4585-a03e-56bcd64e7cc4 (ceph-rolling_update) was prepared for execution. 2026-02-13 06:15:53.571928 | orchestrator | 2026-02-13 06:15:53 | INFO  | It takes a moment until task 7980b5c4-7eee-4585-a03e-56bcd64e7cc4 (ceph-rolling_update) has been started and output is visible here. 
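For reference, the retry behaviour visible in the failure above ("attempts": 5, FAILED - RETRYING counting down, each probe itself hitting the 300 s monclient authentication timeout) is the standard retries/delay loop around a quorum probe. A minimal sketch in Python; `wait_for_quorum` and its `check` callable are hypothetical names standing in for the real probe, which in this log is `docker exec ceph-mon-testbed-node-0 ceph -m 192.168.16.8 quorum_status --format json`:

```python
import time


def wait_for_quorum(check, retries=5, delay=1.0, sleep=time.sleep):
    """Retry a quorum probe the way an Ansible retries/delay loop does:
    call `check()` up to `retries` times, sleeping `delay` seconds
    between attempts. Returns (succeeded, attempts_used).

    `check` stands in for the real probe; here each real attempt can
    itself block for up to 300 s before failing.
    """
    for attempt in range(1, retries + 1):
        if check():
            return True, attempt
        if attempt < retries:
            sleep(delay)
    return False, retries
```

With the monitors never forming quorum, all five attempts fail and the task is marked fatal, which is exactly what the PLAY RECAP below records for testbed-node-0.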
2026-02-13 06:16:55.726941 | orchestrator |
2026-02-13 06:16:55.727037 | orchestrator | TASK [Unmask the mon service] **************************************************
2026-02-13 06:16:55.727049 | orchestrator | Friday 13 February 2026 06:15:49 +0000 (0:31:19.085) 0:36:19.070 *******
2026-02-13 06:16:55.727057 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:16:55.727065 | orchestrator |
2026-02-13 06:16:55.727073 | orchestrator | TASK [Unmask the mgr service] **************************************************
2026-02-13 06:16:55.727080 | orchestrator | Friday 13 February 2026 06:15:50 +0000 (0:00:00.861) 0:36:19.932 *******
2026-02-13 06:16:55.727088 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:16:55.727095 | orchestrator |
2026-02-13 06:16:55.727103 | orchestrator | TASK [Stop the playbook execution] *********************************************
2026-02-13 06:16:55.727110 | orchestrator | Friday 13 February 2026 06:15:51 +0000 (0:00:01.105) 0:36:21.037 *******
2026-02-13 06:16:55.727118 | orchestrator | [WARNING]: Failure using method (v2_runner_on_failed) in callback plugin
2026-02-13 06:16:55.727126 | orchestrator | (): 'cd5d68ca-aa8c-b071-2a87-0000000002a2'
2026-02-13 06:16:55.727142 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. Please, check the previous task results."}
2026-02-13 06:16:55.727150 | orchestrator |
2026-02-13 06:16:55.727157 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 06:16:55.727165 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-13 06:16:55.727173 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-02-13 06:16:55.727180 | orchestrator | testbed-node-0 : ok=121  changed=10  unreachable=0 failed=1  skipped=164  rescued=1  ignored=0
2026-02-13 06:16:55.727219 | orchestrator | testbed-node-1 : ok=25  changed=2  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-02-13 06:16:55.727227 | orchestrator | testbed-node-2 : ok=25  changed=2  unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-02-13 06:16:55.727234 | orchestrator | testbed-node-3 : ok=33  changed=2  unreachable=0 failed=0 skipped=74  rescued=0 ignored=0
2026-02-13 06:16:55.727242 | orchestrator | testbed-node-4 : ok=33  changed=2  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0
2026-02-13 06:16:55.727249 | orchestrator | testbed-node-5 : ok=33  changed=2  unreachable=0 failed=0 skipped=71  rescued=0 ignored=0
2026-02-13 06:16:55.727256 | orchestrator |
2026-02-13 06:16:55.727263 | orchestrator |
2026-02-13 06:16:55.727271 | orchestrator |
2026-02-13 06:16:55.727278 | orchestrator | TASKS RECAP ********************************************************************
2026-02-13 06:16:55.727286 | orchestrator | Friday 13 February 2026 06:15:52 +0000 (0:00:01.818) 0:36:22.856 *******
2026-02-13 06:16:55.727293 | orchestrator | ===============================================================================
2026-02-13 06:16:55.727300 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 1879.09s
2026-02-13 06:16:55.727307 | orchestrator | Gather and delegate facts ---------------------------------------------- 29.36s
2026-02-13 06:16:55.727315 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.16s
2026-02-13 06:16:55.727322 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 12.39s
2026-02-13 06:16:55.727329 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 10.18s
2026-02-13 06:16:55.727336 | orchestrator | Set cluster configs ----------------------------------------------------- 9.95s
2026-02-13 06:16:55.727343 | orchestrator | ceph-mon : Set cluster configs ------------------------------------------ 9.36s
2026-02-13 06:16:55.727351 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 5.93s
2026-02-13 06:16:55.727358 | orchestrator | Gather facts ------------------------------------------------------------ 3.56s
2026-02-13 06:16:55.727365 | orchestrator | Stop ceph mon ----------------------------------------------------------- 2.81s
2026-02-13 06:16:55.727372 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 2.77s
2026-02-13 06:16:55.727380 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 2.41s
2026-02-13 06:16:55.727387 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 2.33s
2026-02-13 06:16:55.727394 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 2.26s
2026-02-13 06:16:55.727402 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 2.24s
2026-02-13 06:16:55.727409 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.24s
2026-02-13 06:16:55.727416 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.18s
2026-02-13 06:16:55.727423 | orchestrator | Gather facts on all Ceph hosts for following reference ------------------ 2.08s
2026-02-13 06:16:55.727431 | orchestrator | ceph-container-engine : Include pre_requisites/prerequisites.yml -------- 2.03s
2026-02-13 06:16:55.727451 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 2.01s
2026-02-13 06:16:55.727459 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-13 06:16:55.727467 | orchestrator | 2.16.14
2026-02-13 06:16:55.727474 | orchestrator |
2026-02-13 06:16:55.727482 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] ****************
2026-02-13 06:16:55.727489 | orchestrator |
2026-02-13 06:16:55.727497 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ******************
2026-02-13 06:16:55.727511 | orchestrator | Friday 13 February 2026 06:16:00 +0000 (0:00:01.400) 0:00:01.400 *******
2026-02-13 06:16:55.727520 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors
2026-02-13 06:16:55.727528 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss
2026-02-13 06:16:55.727537 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients
2026-02-13 06:16:55.727545 | orchestrator | skipping: [localhost]
2026-02-13 06:16:55.727553 | orchestrator |
2026-02-13 06:16:55.727566 | orchestrator | PLAY [Gather facts and check the init system] **********************************
2026-02-13 06:16:55.727578 | orchestrator |
2026-02-13 06:16:55.727590 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ******************
2026-02-13 06:16:55.727602 | orchestrator | Friday 13 February 2026 06:16:02 +0000 (0:00:01.852) 0:00:03.252 *******
2026-02-13 06:16:55.727614 | orchestrator | ok: [testbed-node-0] => {
2026-02-13 06:16:55.727626 | orchestrator |  "msg": "gather facts on all
Ceph hosts for following reference" 2026-02-13 06:16:55.727639 | orchestrator | } 2026-02-13 06:16:55.727651 | orchestrator | ok: [testbed-node-1] => { 2026-02-13 06:16:55.727665 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 06:16:55.727677 | orchestrator | } 2026-02-13 06:16:55.727709 | orchestrator | ok: [testbed-node-2] => { 2026-02-13 06:16:55.727723 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 06:16:55.727736 | orchestrator | } 2026-02-13 06:16:55.727746 | orchestrator | ok: [testbed-node-3] => { 2026-02-13 06:16:55.727755 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 06:16:55.727762 | orchestrator | } 2026-02-13 06:16:55.727769 | orchestrator | ok: [testbed-node-4] => { 2026-02-13 06:16:55.727777 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 06:16:55.727784 | orchestrator | } 2026-02-13 06:16:55.727791 | orchestrator | ok: [testbed-node-5] => { 2026-02-13 06:16:55.727804 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 06:16:55.727812 | orchestrator | } 2026-02-13 06:16:55.727819 | orchestrator | ok: [testbed-manager] => { 2026-02-13 06:16:55.727827 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-13 06:16:55.727834 | orchestrator | } 2026-02-13 06:16:55.727842 | orchestrator | 2026-02-13 06:16:55.727849 | orchestrator | TASK [Gather facts] ************************************************************ 2026-02-13 06:16:55.727856 | orchestrator | Friday 13 February 2026 06:16:08 +0000 (0:00:05.561) 0:00:08.814 ******* 2026-02-13 06:16:55.727864 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:16:55.727871 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:16:55.727878 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:16:55.727886 | orchestrator | skipping: 
[testbed-node-3] 2026-02-13 06:16:55.727893 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:16:55.727900 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:16:55.727908 | orchestrator | ok: [testbed-manager] 2026-02-13 06:16:55.727915 | orchestrator | 2026-02-13 06:16:55.727922 | orchestrator | TASK [Gather and delegate facts] *********************************************** 2026-02-13 06:16:55.727930 | orchestrator | Friday 13 February 2026 06:16:14 +0000 (0:00:06.036) 0:00:14.850 ******* 2026-02-13 06:16:55.727937 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 06:16:55.727945 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-13 06:16:55.727952 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 06:16:55.727960 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-13 06:16:55.727967 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 06:16:55.727974 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-13 06:16:55.727989 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-13 06:16:55.727996 | orchestrator | 2026-02-13 06:16:55.728004 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-13 06:16:55.728011 | orchestrator | Friday 13 February 2026 06:16:48 +0000 (0:00:33.932) 0:00:48.782 ******* 2026-02-13 06:16:55.728018 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:16:55.728026 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:16:55.728033 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:16:55.728040 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:16:55.728048 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:16:55.728055 | orchestrator | ok: 
[testbed-node-5] 2026-02-13 06:16:55.728062 | orchestrator | ok: [testbed-manager] 2026-02-13 06:16:55.728069 | orchestrator | 2026-02-13 06:16:55.728077 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-13 06:16:55.728084 | orchestrator | Friday 13 February 2026 06:16:50 +0000 (0:00:02.140) 0:00:50.923 ******* 2026-02-13 06:16:55.728091 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-13 06:16:55.728099 | orchestrator | 2026-02-13 06:16:55.728106 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-13 06:16:55.728114 | orchestrator | Friday 13 February 2026 06:16:52 +0000 (0:00:02.662) 0:00:53.586 ******* 2026-02-13 06:16:55.728121 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:16:55.728128 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:16:55.728136 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:16:55.728143 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:16:55.728150 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:16:55.728164 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:17:22.061923 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:22.062008 | orchestrator | 2026-02-13 06:17:22.062050 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-13 06:17:22.062058 | orchestrator | Friday 13 February 2026 06:16:55 +0000 (0:00:02.824) 0:00:56.411 ******* 2026-02-13 06:17:22.062064 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:22.062069 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:17:22.062075 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:17:22.062080 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:17:22.062085 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:17:22.062091 | orchestrator | ok: [testbed-node-5] 
2026-02-13 06:17:22.062096 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:22.062101 | orchestrator | 2026-02-13 06:17:22.062106 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-13 06:17:22.062111 | orchestrator | Friday 13 February 2026 06:16:57 +0000 (0:00:01.852) 0:00:58.263 ******* 2026-02-13 06:17:22.062116 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:22.062122 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:17:22.062127 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:17:22.062132 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:17:22.062137 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:17:22.062142 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:17:22.062147 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:22.062152 | orchestrator | 2026-02-13 06:17:22.062157 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-13 06:17:22.062162 | orchestrator | Friday 13 February 2026 06:17:00 +0000 (0:00:02.651) 0:01:00.915 ******* 2026-02-13 06:17:22.062168 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:22.062173 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:17:22.062178 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:17:22.062183 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:17:22.062188 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:17:22.062193 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:17:22.062199 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:22.062208 | orchestrator | 2026-02-13 06:17:22.062216 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-13 06:17:22.062244 | orchestrator | Friday 13 February 2026 06:17:02 +0000 (0:00:01.833) 0:01:02.749 ******* 2026-02-13 06:17:22.062253 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:22.062261 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:17:22.062269 | 
orchestrator | ok: [testbed-node-2] 2026-02-13 06:17:22.062277 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:17:22.062285 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:17:22.062294 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:17:22.062302 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:22.062311 | orchestrator | 2026-02-13 06:17:22.062331 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-13 06:17:22.062337 | orchestrator | Friday 13 February 2026 06:17:04 +0000 (0:00:02.010) 0:01:04.760 ******* 2026-02-13 06:17:22.062342 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:22.062347 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:17:22.062352 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:17:22.062357 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:17:22.062362 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:17:22.062367 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:17:22.062372 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:22.062378 | orchestrator | 2026-02-13 06:17:22.062383 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-13 06:17:22.062389 | orchestrator | Friday 13 February 2026 06:17:05 +0000 (0:00:01.861) 0:01:06.621 ******* 2026-02-13 06:17:22.062394 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:22.062400 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:17:22.062405 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:17:22.062410 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:17:22.062415 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:17:22.062422 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:17:22.062430 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:17:22.062438 | orchestrator | 2026-02-13 06:17:22.062447 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 
2026-02-13 06:17:22.062455 | orchestrator | Friday 13 February 2026 06:17:07 +0000 (0:00:01.848) 0:01:08.470 ******* 2026-02-13 06:17:22.062463 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:22.062472 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:17:22.062480 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:17:22.062485 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:17:22.062492 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:17:22.062497 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:17:22.062504 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:22.062510 | orchestrator | 2026-02-13 06:17:22.062516 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-13 06:17:22.062522 | orchestrator | Friday 13 February 2026 06:17:09 +0000 (0:00:01.902) 0:01:10.373 ******* 2026-02-13 06:17:22.062528 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 06:17:22.062535 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 06:17:22.062541 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 06:17:22.062547 | orchestrator | 2026-02-13 06:17:22.062553 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-13 06:17:22.062559 | orchestrator | Friday 13 February 2026 06:17:11 +0000 (0:00:01.579) 0:01:11.952 ******* 2026-02-13 06:17:22.062565 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:22.062571 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:17:22.062577 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:17:22.062583 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:17:22.062589 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:17:22.062595 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:17:22.062602 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:22.062607 | orchestrator | 2026-02-13 
06:17:22.062613 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-13 06:17:22.062619 | orchestrator | Friday 13 February 2026 06:17:13 +0000 (0:00:01.938) 0:01:13.890 ******* 2026-02-13 06:17:22.062631 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 06:17:22.062637 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-13 06:17:22.062643 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-13 06:17:22.062649 | orchestrator | 2026-02-13 06:17:22.062655 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-13 06:17:22.062673 | orchestrator | Friday 13 February 2026 06:17:16 +0000 (0:00:03.228) 0:01:17.119 ******* 2026-02-13 06:17:22.062680 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-13 06:17:22.062686 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-13 06:17:22.062731 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-13 06:17:22.062737 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:22.062743 | orchestrator | 2026-02-13 06:17:22.062749 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-13 06:17:22.062756 | orchestrator | Friday 13 February 2026 06:17:17 +0000 (0:00:01.434) 0:01:18.554 ******* 2026-02-13 06:17:22.062762 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-13 06:17:22.062771 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 
'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-13 06:17:22.062778 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-13 06:17:22.062784 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:22.062790 | orchestrator | 2026-02-13 06:17:22.062796 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-13 06:17:22.062802 | orchestrator | Friday 13 February 2026 06:17:19 +0000 (0:00:01.822) 0:01:20.377 ******* 2026-02-13 06:17:22.062813 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:22.062822 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:22.062829 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:22.062835 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:22.062842 | orchestrator | 2026-02-13 06:17:22.062848 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-13 06:17:22.062853 | orchestrator | Friday 13 February 2026 06:17:20 +0000 (0:00:01.159) 0:01:21.537 ******* 2026-02-13 06:17:22.062860 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7bdd5a857154', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-13 06:17:13.832563', 'end': '2026-02-13 06:17:13.879570', 'delta': '0:00:00.047007', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7bdd5a857154'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-13 06:17:22.062878 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b8f8955ec790', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-13 06:17:14.677179', 'end': '2026-02-13 06:17:14.723173', 'delta': '0:00:00.045994', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8f8955ec790'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 
2026-02-13 06:17:50.937220 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '30f78d02966b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-13 06:17:15.259153', 'end': '2026-02-13 06:17:15.294951', 'delta': '0:00:00.035798', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['30f78d02966b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-13 06:17:50.937328 | orchestrator | 2026-02-13 06:17:50.937344 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-13 06:17:50.937356 | orchestrator | Friday 13 February 2026 06:17:22 +0000 (0:00:01.208) 0:01:22.745 ******* 2026-02-13 06:17:50.937367 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:50.937378 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:17:50.937387 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:17:50.937397 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:17:50.937407 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:17:50.937416 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:17:50.937426 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:50.937436 | orchestrator | 2026-02-13 06:17:50.937446 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-13 06:17:50.937456 | orchestrator | Friday 13 February 2026 06:17:24 +0000 (0:00:02.491) 0:01:25.237 ******* 2026-02-13 06:17:50.937466 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:50.937477 | orchestrator | 2026-02-13 06:17:50.937502 | orchestrator | TASK [ceph-facts : 
Set_fact current_fsid rc 1] ********************************* 2026-02-13 06:17:50.937513 | orchestrator | Friday 13 February 2026 06:17:25 +0000 (0:00:01.299) 0:01:26.536 ******* 2026-02-13 06:17:50.937523 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:50.937532 | orchestrator | ok: [testbed-node-1] 2026-02-13 06:17:50.937542 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:17:50.937552 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:17:50.937562 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:17:50.937571 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:17:50.937581 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:50.937591 | orchestrator | 2026-02-13 06:17:50.937601 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-13 06:17:50.937632 | orchestrator | Friday 13 February 2026 06:17:27 +0000 (0:00:02.070) 0:01:28.607 ******* 2026-02-13 06:17:50.937642 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:50.937652 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-13 06:17:50.937662 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-13 06:17:50.937671 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-13 06:17:50.937681 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-13 06:17:50.937690 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-13 06:17:50.937740 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-13 06:17:50.937751 | orchestrator | 2026-02-13 06:17:50.937761 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-13 06:17:50.937770 | orchestrator | Friday 13 February 2026 06:17:31 +0000 (0:00:03.456) 0:01:32.064 ******* 2026-02-13 06:17:50.937780 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:17:50.937789 | orchestrator | ok: [testbed-node-1] 
2026-02-13 06:17:50.937802 | orchestrator | ok: [testbed-node-2] 2026-02-13 06:17:50.937819 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:17:50.937837 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:17:50.937852 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:17:50.937866 | orchestrator | ok: [testbed-manager] 2026-02-13 06:17:50.937883 | orchestrator | 2026-02-13 06:17:50.937898 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-13 06:17:50.937914 | orchestrator | Friday 13 February 2026 06:17:33 +0000 (0:00:02.164) 0:01:34.228 ******* 2026-02-13 06:17:50.937931 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:50.937947 | orchestrator | 2026-02-13 06:17:50.937963 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-13 06:17:50.937979 | orchestrator | Friday 13 February 2026 06:17:34 +0000 (0:00:01.230) 0:01:35.459 ******* 2026-02-13 06:17:50.937995 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:50.938079 | orchestrator | 2026-02-13 06:17:50.938102 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-13 06:17:50.938119 | orchestrator | Friday 13 February 2026 06:17:35 +0000 (0:00:01.217) 0:01:36.677 ******* 2026-02-13 06:17:50.938132 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:50.938142 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:17:50.938152 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:17:50.938161 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:17:50.938171 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:17:50.938181 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:17:50.938190 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:17:50.938200 | orchestrator | 2026-02-13 06:17:50.938210 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 
2026-02-13 06:17:50.938220 | orchestrator | Friday 13 February 2026 06:17:38 +0000 (0:00:02.580) 0:01:39.257 ******* 2026-02-13 06:17:50.938230 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:50.938239 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:17:50.938249 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:17:50.938258 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:17:50.938268 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:17:50.938278 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:17:50.938288 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:17:50.938297 | orchestrator | 2026-02-13 06:17:50.938307 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-13 06:17:50.938336 | orchestrator | Friday 13 February 2026 06:17:40 +0000 (0:00:02.053) 0:01:41.311 ******* 2026-02-13 06:17:50.938346 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:50.938356 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:17:50.938365 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:17:50.938375 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:17:50.938385 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:17:50.938395 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:17:50.938414 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:17:50.938423 | orchestrator | 2026-02-13 06:17:50.938433 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-13 06:17:50.938443 | orchestrator | Friday 13 February 2026 06:17:42 +0000 (0:00:02.061) 0:01:43.372 ******* 2026-02-13 06:17:50.938452 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:50.938461 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:17:50.938471 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:17:50.938480 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:17:50.938490 | 
orchestrator | skipping: [testbed-node-4] 2026-02-13 06:17:50.938499 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:17:50.938508 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:17:50.938518 | orchestrator | 2026-02-13 06:17:50.938527 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-13 06:17:50.938537 | orchestrator | Friday 13 February 2026 06:17:44 +0000 (0:00:01.882) 0:01:45.255 ******* 2026-02-13 06:17:50.938546 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:50.938556 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:17:50.938565 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:17:50.938575 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:17:50.938584 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:17:50.938593 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:17:50.938603 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:17:50.938612 | orchestrator | 2026-02-13 06:17:50.938622 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-13 06:17:50.938631 | orchestrator | Friday 13 February 2026 06:17:46 +0000 (0:00:02.122) 0:01:47.378 ******* 2026-02-13 06:17:50.938641 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:50.938658 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:17:50.938668 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:17:50.938677 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:17:50.938686 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:17:50.938754 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:17:50.938773 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:17:50.938790 | orchestrator | 2026-02-13 06:17:50.938807 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-13 06:17:50.938820 | orchestrator | Friday 13 February 2026 
06:17:48 +0000 (0:00:01.887) 0:01:49.265 ******* 2026-02-13 06:17:50.938829 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:50.938839 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:17:50.938848 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:17:50.938858 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:17:50.938868 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:17:50.938877 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:17:50.938886 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:17:50.938896 | orchestrator | 2026-02-13 06:17:50.938906 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-13 06:17:50.938915 | orchestrator | Friday 13 February 2026 06:17:50 +0000 (0:00:02.235) 0:01:51.501 ******* 2026-02-13 06:17:50.938927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:50.938940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:50.938958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:50.938970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 06:17:50.938990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8816e0be', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1'], 'uuids': 
['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 06:17:51.051515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 06:17:51.051644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': 
None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.051759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e7782c1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 06:17:51.428181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428266 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:17:51.428274 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 06:17:51.428318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428330 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '70bc5ce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 
'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-13 06:17:51.428368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428381 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:17:51.428387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.428393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab', 'dm-uuid-LVM-rnSZIgArmxAmbcLvOJFLEn8mgwYRnXlE3olXViRUdTa1K1tyYaVS99W21lGqyhJE'], 'uuids': ['e40d66eb-8e66-4883-be8d-d975a39e8f71'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a697f046', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE']}})  2026-02-13 06:17:51.428408 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226', 'scsi-SQEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4e1fd529', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 06:17:51.570248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-09kMNs-4MO2-JNQz-8aT0-f4so-6Z9I-fZuQQ1', 'scsi-0QEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165', 'scsi-SQEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ecca72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f']}})  2026-02-13 06:17:51.570430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.570465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.570484 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:17:51.570511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 06:17:51.570537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 
'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.570557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM', 'dm-uuid-CRYPT-LUKS2-f8c9b83f530a4ae8b2d9ba3a7349e63b-PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 06:17:51.570594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.570639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f', 'dm-uuid-LVM-NgeS2OAf1eQbq2fjon94hTyRASj6CjzqPJD89JdnKlkkAQnNMDwPk0jJQkfrVtCM'], 'uuids': ['f8c9b83f-530a-4ae8-b2d9-ba3a7349e63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '48ecca72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': 
['PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM']}})  2026-02-13 06:17:51.570671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-NVJFab-TDNv-OZxQ-P7ah-aykU-eVq3-5VieAW', 'scsi-0QEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322', 'scsi-SQEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a697f046', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab']}})  2026-02-13 06:17:51.570690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.570776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd82ec97d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16', 
'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-13 06:17:51.570810 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.783998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.784097 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE', 'dm-uuid-CRYPT-LUKS2-e40d66eb8e664883be8dd975a39e8f71-3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 06:17:51.784116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}})  2026-02-13 06:17:51.784131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f', 'dm-uuid-LVM-RYX1Dlxf1hzjqbJFMgqiTL3FjKVcMxwPPZJAxrorT0BeTcQP51a9OdG0Vnk33f2g'], 'uuids': ['08a6103f-7fcb-4231-b947-0f95a49b9065'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '848b7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g']}})  2026-02-13 06:17:51.784145 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460', 'scsi-SQEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b26d7d0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-13 06:17:51.784177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1jNUFK-ju5u-D7ij-Py62-0wVT-eVBU-hKEJvE', 'scsi-0QEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788', 'scsi-SQEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '328f169c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6']}})  2026-02-13 06:17:51.784190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.784241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.784254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-51-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 06:17:51.784267 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:17:51.784280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.784291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI', 'dm-uuid-CRYPT-LUKS2-b79d0c525d1a4583b35f4aeb5a2ac24e-8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 06:17:51.784303 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 
06:17:51.784315 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6', 'dm-uuid-LVM-smkv35UmDioSyiKczhjvHmfqXmqpX7QT8MWiF1jmxyBB14hpOPcESPktQ6Pbw4WI'], 'uuids': ['b79d0c52-5d1a-4583-b35f-4aeb5a2ac24e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '328f169c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI']}})  2026-02-13 06:17:51.784358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6g4jq1-0RJN-2V5m-4iLs-xOZr-EnEV-0z42fM', 'scsi-0QEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52', 'scsi-SQEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '848b7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f']}})  2026-02-13 06:17:51.784387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.970104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6ae2313', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 06:17:51.970240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.970269 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.970324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.970374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g', 'dm-uuid-CRYPT-LUKS2-08a6103f7fcb4231b9470f95a49b9065-PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 06:17:51.970425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1', 'dm-uuid-LVM-RKsGyEe6XXFp06rqxLIXGVK0DxbU0GWh40QmdxhJXhUwOk2tHWKnT9i9j7e2AfAw'], 'uuids': ['3a3054ab-e73d-4dec-b96d-e7c980380425'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2cf23bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw']}})  2026-02-13 06:17:51.970450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d', 'scsi-SQEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53853b9a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 06:17:51.970473 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-39Ra41-aCTS-vi2k-2lif-ZhtI-jPX4-Yda4Fg', 'scsi-0QEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e', 'scsi-SQEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e8d0143b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6']}})  2026-02-13 06:17:51.970495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.970517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.970559 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-13 06:17:51.970583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:51.970619 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT', 'dm-uuid-CRYPT-LUKS2-6c8d9b65364e41e0b393c831fad91b63-Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 06:17:53.197191 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:17:53.197322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:53.197344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6', 'dm-uuid-LVM-9LyOomemE8dFgmHX9kCkGcu77vJ6QdzmZ9A74lmOVeHsLlc22BADhqJ8uA2fx6vT'], 'uuids': ['6c8d9b65-364e-41e0-b393-c831fad91b63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e8d0143b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT']}})  2026-02-13 06:17:53.197359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-198k1R-oXI9-ndMQ-UumA-r8dv-vGdj-iXXLN8', 'scsi-0QEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3', 'scsi-SQEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2cf23bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1']}})  2026-02-13 06:17:53.197372 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:53.197448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd8b8514', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-13 06:17:53.197466 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:53.197478 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:53.197490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:53.197501 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:53.197520 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-13 06:17:53.197537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw', 'dm-uuid-CRYPT-LUKS2-3a3054abe73d4decb96de7c980380425-40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-13 06:17:53.197550 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:17:53.197562 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-26-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 
'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-13 06:17:53.197582 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:17:53.334379 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:17:53.334509 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:17:53.334565 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f5b10e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-13 06:17:53.334622 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:17:53.334644 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:17:53.334664 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:17:53.334684 | orchestrator |
2026-02-13 06:17:53.334805 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-13 06:17:53.334828 | orchestrator | Friday 13 February 2026 06:17:53 +0000 (0:00:02.387) 0:01:53.888 *******
2026-02-13 06:17:53.334867 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.334880 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.334892 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.334917 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.334938 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.334951 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.334973 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526221 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8816e0be', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526321 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526332 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526339 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:17:53.526360 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526366 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526372 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526387 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-44-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526397 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526403 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526410 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.526423 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1e7782c1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1', 'scsi-SQEMU_QEMU_HARDDISK_1e7782c1-d478-46d9-a0ec-d13f1d0cd82b-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.672808 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.672918 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.672939 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:17:53.672953 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.672966 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.673015 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.673031 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-43-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.673072 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.673082 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.673090 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.673100 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '70bc5ce7', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1', 'scsi-SQEMU_QEMU_HARDDISK_70bc5ce7-ef2b-48d3-8c75-27accd01fe36-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:53.673126 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148340 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148428 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:17:54.148440 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148450 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab', 'dm-uuid-LVM-rnSZIgArmxAmbcLvOJFLEn8mgwYRnXlE3olXViRUdTa1K1tyYaVS99W21lGqyhJE'], 'uuids': ['e40d66eb-8e66-4883-be8d-d975a39e8f71'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a697f046', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE']}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148487 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226', 'scsi-SQEMU_QEMU_HARDDISK_4e1fd529-f92d-4aae-9efe-84acf01c9226'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4e1fd529', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148509 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-09kMNs-4MO2-JNQz-8aT0-f4so-6Z9I-fZuQQ1', 'scsi-0QEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165', 'scsi-SQEMU_QEMU_HARDDISK_48ecca72-7ee3-4b3a-9d71-2cc28b178165'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '48ecca72', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f']}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148535 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148544 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148552 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-48-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148566 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148573 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM', 'dm-uuid-CRYPT-LUKS2-f8c9b83f530a4ae8b2d9ba3a7349e63b-PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148581 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.148598 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--90d7f9ba--9289--5e80--9038--1ad4979f4e3f-osd--block--90d7f9ba--9289--5e80--9038--1ad4979f4e3f', 'dm-uuid-LVM-NgeS2OAf1eQbq2fjon94hTyRASj6CjzqPJD89JdnKlkkAQnNMDwPk0jJQkfrVtCM'], 'uuids': ['f8c9b83f-530a-4ae8-b2d9-ba3a7349e63b'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '48ecca72', 'removable': '0', 'support_discard': '4096', 'partitions': {},
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['PJD89J-dnKl-kkAQ-nNMD-wPk0-jJQk-frVtCM']}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.296236 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.296364 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f', 'dm-uuid-LVM-RYX1Dlxf1hzjqbJFMgqiTL3FjKVcMxwPPZJAxrorT0BeTcQP51a9OdG0Vnk33f2g'], 'uuids': ['08a6103f-7fcb-4231-b947-0f95a49b9065'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '848b7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g']}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.296382 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-NVJFab-TDNv-OZxQ-P7ah-aykU-eVq3-5VieAW', 'scsi-0QEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322', 'scsi-SQEMU_QEMU_HARDDISK_a697f046-4fd0-4ab4-8d74-c390a778d322'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a697f046', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--7c5ad083--16ef--5861--9238--a28b124c66ab-osd--block--7c5ad083--16ef--5861--9238--a28b124c66ab']}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.296414 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460', 'scsi-SQEMU_QEMU_HARDDISK_5b26d7d0-a0c8-4c7f-bd9d-e63316d26460'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '5b26d7d0', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.296447 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-1jNUFK-ju5u-D7ij-Py62-0wVT-eVBU-hKEJvE', 'scsi-0QEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788', 'scsi-SQEMU_QEMU_HARDDISK_328f169c-733e-4f14-823b-87aac3d7f788'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '328f169c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6']}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.296461 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.296480 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.296492 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.296519 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd82ec97d', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1', 'scsi-SQEMU_QEMU_HARDDISK_d82ec97d-f827-4100-86b5-d0feadaf576d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364586 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364802 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-51-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': 
'506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364815 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE', 'dm-uuid-CRYPT-LUKS2-e40d66eb8e664883be8dd975a39e8f71-3olXVi-RUdT-a1K1-tyYa-VS99-W21l-GqyhJE'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364827 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI', 
'dm-uuid-CRYPT-LUKS2-b79d0c525d1a4583b35f4aeb5a2ac24e-8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364932 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364945 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--43dba57c--3e97--52bb--978e--0b7bf56fe0c6-osd--block--43dba57c--3e97--52bb--978e--0b7bf56fe0c6', 'dm-uuid-LVM-smkv35UmDioSyiKczhjvHmfqXmqpX7QT8MWiF1jmxyBB14hpOPcESPktQ6Pbw4WI'], 'uuids': ['b79d0c52-5d1a-4583-b35f-4aeb5a2ac24e'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '328f169c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MWiF1-jmxy-BB14-hpOP-cESP-ktQ6-Pbw4WI']}}, 
'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-6g4jq1-0RJN-2V5m-4iLs-xOZr-EnEV-0z42fM', 'scsi-0QEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52', 'scsi-SQEMU_QEMU_HARDDISK_848b7966-1abc-45c8-bb4e-7a18a2718e52'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '848b7966', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5ce47f09--4cf3--58ef--8e90--2b997425535f-osd--block--5ce47f09--4cf3--58ef--8e90--2b997425535f']}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364978 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.364990 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 
'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.365015 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e6ae2313', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 
512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1', 'scsi-SQEMU_QEMU_HARDDISK_e6ae2313-edff-4f38-a15e-e73833441a0d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.638564 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1', 'dm-uuid-LVM-RKsGyEe6XXFp06rqxLIXGVK0DxbU0GWh40QmdxhJXhUwOk2tHWKnT9i9j7e2AfAw'], 'uuids': ['3a3054ab-e73d-4dec-b96d-e7c980380425'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2cf23bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw']}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.638721 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 
'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.638754 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.638804 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d', 'scsi-SQEMU_QEMU_HARDDISK_53853b9a-f5c7-4285-928f-a8aa60d7202d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '53853b9a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.638826 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g', 'dm-uuid-CRYPT-LUKS2-08a6103f7fcb4231b9470f95a49b9065-PZJAxr-orT0-BeTc-QP51-a9Od-G0Vn-k33f2g'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-13 06:17:54.638872 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-39Ra41-aCTS-vi2k-2lif-ZhtI-jPX4-Yda4Fg', 'scsi-0QEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e', 'scsi-SQEMU_QEMU_HARDDISK_e8d0143b-93aa-4fea-9af4-d1456432661e'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e8d0143b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6']}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.638916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.638941 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.638972 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-46-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.638992 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:17:54.639012 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.639031 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT', 'dm-uuid-CRYPT-LUKS2-6c8d9b65364e41e0b393c831fad91b63-Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.639063 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700066 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:17:54.700171 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--8151fb69--3858--5887--af01--e0d44d84b3e6-osd--block--8151fb69--3858--5887--af01--e0d44d84b3e6', 'dm-uuid-LVM-9LyOomemE8dFgmHX9kCkGcu77vJ6QdzmZ9A74lmOVeHsLlc22BADhqJ8uA2fx6vT'], 'uuids': ['6c8d9b65-364e-41e0-b393-c831fad91b63'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'e8d0143b', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Z9A74l-mOVe-HsLl-c22B-ADhq-J8uA-2fx6vT']}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700204 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-198k1R-oXI9-ndMQ-UumA-r8dv-vGdj-iXXLN8', 'scsi-0QEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3', 'scsi-SQEMU_QEMU_HARDDISK_a2cf23bc-7fe2-4567-b5c7-4e51efed82f3'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'a2cf23bc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--5f44536a--6e14--5adc--b1bb--0c010a1280f1-osd--block--5f44536a--6e14--5adc--b1bb--0c010a1280f1']}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700216 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700226 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700235 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700258 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700273 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-26-13-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700289 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700298 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700306 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:17:54.700326 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition':
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'fd8b8514', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1', 'scsi-SQEMU_QEMU_HARDDISK_fd8b8514-7874-426e-a54e-5d908caa4a6d-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 
'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:18:09.975307 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '91f5b10e', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part16', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part14', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part15', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part1', 'scsi-SQEMU_QEMU_HARDDISK_91f5b10e-f3e3-4ebd-b719-1fd016e5b677-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:18:09.975447 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:18:09.975491 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:18:09.975559 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:18:09.975581 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:18:09.975600 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:18:09.975622 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw', 'dm-uuid-CRYPT-LUKS2-3a3054abe73d4decb96de7c980380425-40Qmdx-hJXh-UwOk-2tHW-KnT9-i9j7-e2AfAw'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:18:09.975640 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:18:09.975658 | orchestrator |
2026-02-13 06:18:09.975676 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-13 06:18:09.975786 | orchestrator | Friday 13 February 2026 06:17:56 +0000 (0:00:02.901) 0:01:56.790 *******
2026-02-13 06:18:09.975813 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:18:09.975831 | orchestrator | ok: [testbed-node-1]
2026-02-13 06:18:09.975847 | orchestrator | ok: [testbed-node-2]
2026-02-13 06:18:09.975864 | orchestrator | ok: [testbed-node-3]
2026-02-13 06:18:09.975881 | orchestrator | ok: [testbed-node-4]
2026-02-13 06:18:09.975898 | orchestrator | ok: [testbed-node-5]
2026-02-13 06:18:09.975915 | orchestrator | ok: [testbed-manager]
2026-02-13 06:18:09.975932 | orchestrator |
2026-02-13 06:18:09.975948 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-13 06:18:09.975968 | orchestrator | Friday 13 February 2026 06:17:58 +0000 (0:00:02.690) 0:01:59.481 *******
2026-02-13 06:18:09.975985 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:18:09.976002 | orchestrator | ok: [testbed-node-1]
2026-02-13 06:18:09.976019 | orchestrator | ok: [testbed-node-2]
2026-02-13 06:18:09.976036 | orchestrator | ok: [testbed-node-3]
2026-02-13 06:18:09.976053 | orchestrator | ok: [testbed-node-4]
2026-02-13 06:18:09.976070 | orchestrator | ok: [testbed-node-5]
2026-02-13 06:18:09.976087 | orchestrator | ok: [testbed-manager]
2026-02-13 06:18:09.976104 | orchestrator |
2026-02-13 06:18:09.976121 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-13 06:18:09.976138 | orchestrator | Friday 13 February 2026 06:18:00 +0000 (0:00:02.020) 0:02:01.501 *******
2026-02-13 06:18:09.976155 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:18:09.976171 | orchestrator | ok: [testbed-node-1]
2026-02-13 06:18:09.976189 | orchestrator | ok: [testbed-node-2]
2026-02-13 06:18:09.976205 | orchestrator | ok: [testbed-node-3]
2026-02-13 06:18:09.976241 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:18:09.976258 | orchestrator | ok: [testbed-node-4]
2026-02-13 06:18:09.976275 | orchestrator | ok: [testbed-node-5]
2026-02-13 06:18:09.976286 | orchestrator |
2026-02-13 06:18:09.976296 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-13 06:18:09.976305 | orchestrator | Friday 13 February 2026 06:18:03 +0000 (0:00:02.385) 0:02:03.886 *******
2026-02-13 06:18:09.976315 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:18:09.976324 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:18:09.976334 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:18:09.976343 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:18:09.976354 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:18:09.976371 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:18:09.976387 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:18:09.976404 | orchestrator |
2026-02-13 06:18:09.976421 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-13 06:18:09.976435 | orchestrator | Friday 13 February 2026 06:18:05 +0000 (0:00:01.990) 0:02:05.877 *******
2026-02-13 06:18:09.976445 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:18:09.976455 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:18:09.976475 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:18:09.976485 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:18:09.976494 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:18:09.976504 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:18:09.976514 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-02-13 06:18:09.976523 | orchestrator |
2026-02-13 06:18:09.976533 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-13 06:18:09.976543 | orchestrator | Friday 13 February 2026 06:18:07 +0000 (0:00:02.674) 0:02:08.551 *******
2026-02-13 06:18:09.976552 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:18:09.976562 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:18:09.976572 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:18:09.976581 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:18:09.976591 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:18:09.976601 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:18:09.976610 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:18:09.976620 | orchestrator |
2026-02-13 06:18:09.976630 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-13 06:18:46.791480 | orchestrator | Friday 13 February 2026 06:18:09 +0000 (0:00:02.104) 0:02:10.656 *******
2026-02-13 06:18:46.791619 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:18:46.791645 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 06:18:46.791657 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-13 06:18:46.791669 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-13 06:18:46.791680 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-13 06:18:46.791745 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 06:18:46.791758 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-13 06:18:46.791770 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-13 06:18:46.791781 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-13 06:18:46.791792 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-13 06:18:46.791802 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-13 06:18:46.791813 |
orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-13 06:18:46.791824 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-13 06:18:46.791835 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-13 06:18:46.791846 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-13 06:18:46.791856 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-13 06:18:46.791867 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-13 06:18:46.791904 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-13 06:18:46.791915 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-13 06:18:46.791926 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-13 06:18:46.791936 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-13 06:18:46.791947 | orchestrator |
2026-02-13 06:18:46.791959 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-13 06:18:46.791970 | orchestrator | Friday 13 February 2026 06:18:12 +0000 (0:00:02.881) 0:02:13.538 *******
2026-02-13 06:18:46.791982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:18:46.791993 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 06:18:46.792003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 06:18:46.792015 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:18:46.792026 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-13 06:18:46.792037 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-13 06:18:46.792048 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-13 06:18:46.792059 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:18:46.792069 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-13 06:18:46.792080 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-13 06:18:46.792090 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-13 06:18:46.792101 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:18:46.792112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-13 06:18:46.792123 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-13 06:18:46.792133 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-13 06:18:46.792144 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:18:46.792155 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-13 06:18:46.792165 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-13 06:18:46.792176 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-13 06:18:46.792186 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:18:46.792197 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-13 06:18:46.792208 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-13 06:18:46.792218 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-13 06:18:46.792229 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:18:46.792240 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-13 06:18:46.792250 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-13 06:18:46.792261 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-13 06:18:46.792272 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:18:46.792282 | orchestrator |
2026-02-13 06:18:46.792293 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-13 06:18:46.792304 | orchestrator | Friday 13 February 2026 06:18:15 +0000 (0:00:02.188) 0:02:15.726 *******
2026-02-13 06:18:46.792315 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:18:46.792325 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:18:46.792336 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:18:46.792346 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:18:46.792373 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-13 06:18:46.792385 | orchestrator |
2026-02-13 06:18:46.792396 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-13 06:18:46.792408 | orchestrator | Friday 13 February 2026 06:18:16 +0000 (0:00:01.918) 0:02:17.645 *******
2026-02-13 06:18:46.792419 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:18:46.792430 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:18:46.792448 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:18:46.792459 | orchestrator |
2026-02-13 06:18:46.792470 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-13 06:18:46.792480 | orchestrator | Friday 13 February 2026 06:18:18 +0000 (0:00:01.604) 0:02:19.249 *******
2026-02-13 06:18:46.792491 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:18:46.792502 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:18:46.792531 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:18:46.792542 | orchestrator |
2026-02-13 06:18:46.792553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-13 06:18:46.792564 | orchestrator | Friday 13 February 2026 06:18:19 +0000 (0:00:01.407) 0:02:20.657 *******
2026-02-13 06:18:46.792575 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:18:46.792586 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:18:46.792596 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:18:46.792607 | orchestrator |
2026-02-13 06:18:46.792618 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-13 06:18:46.792629 | orchestrator | Friday 13 February 2026 06:18:21 +0000 (0:00:01.365) 0:02:22.022 *******
2026-02-13 06:18:46.792640 | orchestrator | ok: [testbed-node-3]
2026-02-13 06:18:46.792651 | orchestrator | ok: [testbed-node-4]
2026-02-13 06:18:46.792661 | orchestrator | ok: [testbed-node-5]
2026-02-13 06:18:46.792672 | orchestrator |
2026-02-13 06:18:46.792683 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-13 06:18:46.792755 | orchestrator | Friday 13 February 2026 06:18:22 +0000 (0:00:01.492) 0:02:23.514 *******
2026-02-13 06:18:46.792767 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 06:18:46.792778 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 06:18:46.792789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 06:18:46.792800 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:18:46.792810 | orchestrator |
2026-02-13 06:18:46.792821 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-13 06:18:46.792832 | orchestrator | Friday 13 February 2026 06:18:24 +0000 (0:00:01.692) 0:02:25.207 *******
2026-02-13 06:18:46.792842 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 06:18:46.792853 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 06:18:46.792864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 06:18:46.792874 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:18:46.792885 | orchestrator |
2026-02-13 06:18:46.792896 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-13 06:18:46.792907 | orchestrator | Friday 13 February 2026 06:18:26 +0000 (0:00:01.745) 0:02:26.953 *******
2026-02-13 06:18:46.792917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-13 06:18:46.792928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-13 06:18:46.792939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-13 06:18:46.792949 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:18:46.792960 | orchestrator |
2026-02-13 06:18:46.792971 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-13 06:18:46.792981 | orchestrator | Friday 13 February 2026 06:18:28 +0000 (0:00:01.809) 0:02:28.763 *******
2026-02-13 06:18:46.792992 | orchestrator | ok: [testbed-node-3]
2026-02-13 06:18:46.793003 | orchestrator | ok: [testbed-node-4]
2026-02-13 06:18:46.793013 | orchestrator | ok: [testbed-node-5]
2026-02-13 06:18:46.793024 | orchestrator |
2026-02-13 06:18:46.793035 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-13 06:18:46.793046 | orchestrator | Friday 13 February 2026 06:18:29 +0000 (0:00:01.421) 0:02:30.184 *******
2026-02-13 06:18:46.793057 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-13 06:18:46.793068 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-13 06:18:46.793078 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-13 06:18:46.793097 | orchestrator |
2026-02-13 06:18:46.793108 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-13 06:18:46.793119 | orchestrator | Friday 13 February 2026 06:18:31 +0000 (0:00:01.551) 0:02:31.736 *******
2026-02-13 06:18:46.793129 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:18:46.793140 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 06:18:46.793152 | orchestrator | ok: [testbed-node-0 ->
testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 06:18:46.793163 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-13 06:18:46.793173 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-13 06:18:46.793184 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-13 06:18:46.793195 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-13 06:18:46.793206 | orchestrator |
2026-02-13 06:18:46.793216 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-13 06:18:46.793227 | orchestrator | Friday 13 February 2026 06:18:33 +0000 (0:00:02.019) 0:02:33.755 *******
2026-02-13 06:18:46.793238 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:18:46.793248 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 06:18:46.793265 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 06:18:46.793276 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-13 06:18:46.793287 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-13 06:18:46.793307 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-13 06:18:46.793325 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-13 06:18:46.793342 | orchestrator |
2026-02-13 06:18:46.793361 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-02-13 06:18:46.793383 | orchestrator | Friday 13 February 2026 06:18:35 +0000 (0:00:02.911) 0:02:36.666 *******
2026-02-13 06:18:46.793401 | orchestrator | changed: [testbed-node-3]
2026-02-13 06:18:46.793421 | orchestrator | changed: [testbed-node-4]
2026-02-13 06:18:46.793440 | orchestrator | changed: [testbed-manager]
2026-02-13 06:18:46.793470 | orchestrator | changed: [testbed-node-5]
2026-02-13 06:19:22.768382 | orchestrator | changed: [testbed-node-0]
2026-02-13 06:19:22.768529 | orchestrator | changed: [testbed-node-1]
2026-02-13 06:19:22.768553 | orchestrator | changed: [testbed-node-2]
2026-02-13 06:19:22.768572 | orchestrator |
2026-02-13 06:19:22.768592 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-02-13 06:19:22.768610 | orchestrator | Friday 13 February 2026 06:18:46 +0000 (0:00:10.807) 0:02:47.474 *******
2026-02-13 06:19:22.768630 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:19:22.768648 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:19:22.768665 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:19:22.768684 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:19:22.768764 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:19:22.768783 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:19:22.768801 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:19:22.768818 | orchestrator |
2026-02-13 06:19:22.768836 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-13 06:19:22.768855 | orchestrator | Friday 13 February 2026 06:18:48 +0000 (0:00:02.022) 0:02:49.497 *******
2026-02-13 06:19:22.768873 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:19:22.768891 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:19:22.768912 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:19:22.768934 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:19:22.768988 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:19:22.769009 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:19:22.769028 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:19:22.769047 | orchestrator |
2026-02-13 06:19:22.769066 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-02-13 06:19:22.769086 | orchestrator | Friday 13 February 2026 06:18:50 +0000 (0:00:01.920) 0:02:51.417 *******
2026-02-13 06:19:22.769104 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:19:22.769124 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:19:22.769146 | orchestrator | ok: [testbed-node-1]
2026-02-13 06:19:22.769166 | orchestrator | ok: [testbed-node-2]
2026-02-13 06:19:22.769187 | orchestrator | ok: [testbed-node-3]
2026-02-13 06:19:22.769206 | orchestrator | ok: [testbed-node-4]
2026-02-13 06:19:22.769226 | orchestrator | ok: [testbed-node-5]
2026-02-13 06:19:22.769244 | orchestrator |
2026-02-13 06:19:22.769262 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-02-13 06:19:22.769280 | orchestrator | Friday 13 February 2026 06:18:53 +0000 (0:00:03.050) 0:02:54.468 *******
2026-02-13 06:19:22.769300 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-13 06:19:22.769320 | orchestrator |
2026-02-13 06:19:22.769338 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-13 06:19:22.769355 | orchestrator | Friday 13 February 2026 06:18:56 +0000 (0:00:02.990) 0:02:57.458 *******
2026-02-13 06:19:22.769370 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:19:22.769387 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:19:22.769435 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:19:22.769455 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:19:22.769473 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:19:22.769489 | orchestrator | skipping: [testbed-node-5]
2026-02-13
06:19:22.769500 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.769511 | orchestrator | 2026-02-13 06:19:22.769522 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ****************************** 2026-02-13 06:19:22.769533 | orchestrator | Friday 13 February 2026 06:18:58 +0000 (0:00:01.931) 0:02:59.390 ******* 2026-02-13 06:19:22.769544 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.769556 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.769567 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.769577 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.769588 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.769599 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.769610 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.769620 | orchestrator | 2026-02-13 06:19:22.769631 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************ 2026-02-13 06:19:22.769642 | orchestrator | Friday 13 February 2026 06:19:01 +0000 (0:00:02.513) 0:03:01.904 ******* 2026-02-13 06:19:22.769653 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.769664 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.769674 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.769747 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.769763 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.769774 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.769784 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.769795 | orchestrator | 2026-02-13 06:19:22.769806 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************ 2026-02-13 06:19:22.769817 | orchestrator | Friday 13 February 2026 06:19:03 +0000 (0:00:02.076) 0:03:03.980 ******* 2026-02-13 06:19:22.769828 | orchestrator | skipping: [testbed-node-0] 2026-02-13 
06:19:22.769838 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.769849 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.769859 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.769870 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.769909 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.769921 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.769931 | orchestrator | 2026-02-13 06:19:22.769942 | orchestrator | TASK [ceph-validate : Fail on unsupported CentOS release] ********************** 2026-02-13 06:19:22.769953 | orchestrator | Friday 13 February 2026 06:19:05 +0000 (0:00:02.121) 0:03:06.102 ******* 2026-02-13 06:19:22.769964 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.769974 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.769985 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.769995 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.770006 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.770091 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.770104 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.770115 | orchestrator | 2026-02-13 06:19:22.770127 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-02-13 06:19:22.770139 | orchestrator | Friday 13 February 2026 06:19:07 +0000 (0:00:01.892) 0:03:07.994 ******* 2026-02-13 06:19:22.770150 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.770196 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.770208 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.770219 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.770230 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.770240 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.770251 | orchestrator | skipping: [testbed-manager] 2026-02-13 
06:19:22.770262 | orchestrator | 2026-02-13 06:19:22.770273 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-02-13 06:19:22.770284 | orchestrator | Friday 13 February 2026 06:19:09 +0000 (0:00:02.141) 0:03:10.136 ******* 2026-02-13 06:19:22.770295 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.770305 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.770316 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.770327 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.770338 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.770348 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.770359 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.770369 | orchestrator | 2026-02-13 06:19:22.770380 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-02-13 06:19:22.770391 | orchestrator | Friday 13 February 2026 06:19:11 +0000 (0:00:02.099) 0:03:12.236 ******* 2026-02-13 06:19:22.770402 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.770413 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.770423 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.770434 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.770444 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.770455 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.770465 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.770476 | orchestrator | 2026-02-13 06:19:22.770487 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-02-13 06:19:22.770498 | orchestrator | Friday 13 February 2026 06:19:13 +0000 (0:00:01.960) 0:03:14.196 ******* 2026-02-13 06:19:22.770509 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.770519 | orchestrator | skipping: 
[testbed-node-1] 2026-02-13 06:19:22.770530 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.770540 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.770551 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.770562 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.770572 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.770583 | orchestrator | 2026-02-13 06:19:22.770594 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-02-13 06:19:22.770605 | orchestrator | Friday 13 February 2026 06:19:15 +0000 (0:00:02.256) 0:03:16.453 ******* 2026-02-13 06:19:22.770615 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.770634 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.770645 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.770656 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.770666 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.770678 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.770752 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.770773 | orchestrator | 2026-02-13 06:19:22.770790 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-02-13 06:19:22.770802 | orchestrator | Friday 13 February 2026 06:19:17 +0000 (0:00:01.834) 0:03:18.287 ******* 2026-02-13 06:19:22.770813 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.770823 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.770834 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.770845 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.770856 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.770866 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.770877 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.770888 | orchestrator | 2026-02-13 
06:19:22.770899 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-02-13 06:19:22.770910 | orchestrator | Friday 13 February 2026 06:19:19 +0000 (0:00:02.289) 0:03:20.576 ******* 2026-02-13 06:19:22.770921 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.770932 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.770942 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.770953 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.770966 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.770985 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:22.771010 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:22.771033 | orchestrator | 2026-02-13 06:19:22.771051 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-02-13 06:19:22.771068 | orchestrator | Friday 13 February 2026 06:19:21 +0000 (0:00:01.928) 0:03:22.505 ******* 2026-02-13 06:19:22.771086 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:22.771103 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:22.771120 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:22.771140 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})  2026-02-13 06:19:22.771169 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})  2026-02-13 06:19:22.771189 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:22.771208 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 06:19:22.771227 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 06:19:22.771240 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:22.771251 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 06:19:22.771274 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 06:19:51.166859 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.166960 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:51.166974 | orchestrator | 2026-02-13 06:19:51.166985 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-02-13 06:19:51.166995 | orchestrator | Friday 13 February 2026 06:19:23 +0000 (0:00:02.053) 0:03:24.559 ******* 2026-02-13 06:19:51.167004 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:51.167014 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:51.167041 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:51.167050 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.167058 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.167066 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.167075 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:51.167083 | orchestrator | 2026-02-13 06:19:51.167091 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-02-13 06:19:51.167099 | orchestrator | Friday 13 February 2026 06:19:25 +0000 (0:00:02.030) 0:03:26.590 ******* 2026-02-13 06:19:51.167107 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:51.167116 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:51.167124 | orchestrator | skipping: [testbed-node-2] 
2026-02-13 06:19:51.167132 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.167140 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.167148 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.167156 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:51.167164 | orchestrator | 2026-02-13 06:19:51.167173 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-02-13 06:19:51.167181 | orchestrator | Friday 13 February 2026 06:19:28 +0000 (0:00:02.199) 0:03:28.790 ******* 2026-02-13 06:19:51.167189 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:51.167197 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:51.167205 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:51.167213 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.167221 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.167229 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.167237 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:51.167245 | orchestrator | 2026-02-13 06:19:51.167253 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-02-13 06:19:51.167261 | orchestrator | Friday 13 February 2026 06:19:30 +0000 (0:00:02.148) 0:03:30.938 ******* 2026-02-13 06:19:51.167269 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:51.167277 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:51.167285 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:51.167293 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.167301 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.167309 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.167317 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:51.167325 | orchestrator | 2026-02-13 06:19:51.167333 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] 
******************************** 2026-02-13 06:19:51.167342 | orchestrator | Friday 13 February 2026 06:19:32 +0000 (0:00:01.943) 0:03:32.881 ******* 2026-02-13 06:19:51.167350 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:51.167358 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:51.167366 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:51.167374 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.167382 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.167391 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.167399 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:51.167408 | orchestrator | 2026-02-13 06:19:51.167418 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-13 06:19:51.167428 | orchestrator | Friday 13 February 2026 06:19:34 +0000 (0:00:02.085) 0:03:34.967 ******* 2026-02-13 06:19:51.167437 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:51.167447 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:51.167456 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:51.167465 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.167475 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.167484 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.167493 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:19:51.167502 | orchestrator | 2026-02-13 06:19:51.167512 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-13 06:19:51.167522 | orchestrator | Friday 13 February 2026 06:19:36 +0000 (0:00:01.952) 0:03:36.919 ******* 2026-02-13 06:19:51.167538 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:19:51.167548 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:19:51.167557 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:19:51.167566 | orchestrator | skipping: 
[testbed-manager] 2026-02-13 06:19:51.167576 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 06:19:51.167585 | orchestrator | 2026-02-13 06:19:51.167595 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-13 06:19:51.167604 | orchestrator | Friday 13 February 2026 06:19:38 +0000 (0:00:02.780) 0:03:39.700 ******* 2026-02-13 06:19:51.167613 | orchestrator | ok: [testbed-node-3] 2026-02-13 06:19:51.167636 | orchestrator | ok: [testbed-node-4] 2026-02-13 06:19:51.167646 | orchestrator | ok: [testbed-node-5] 2026-02-13 06:19:51.167655 | orchestrator | 2026-02-13 06:19:51.167664 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-13 06:19:51.167674 | orchestrator | Friday 13 February 2026 06:19:40 +0000 (0:00:01.374) 0:03:41.074 ******* 2026-02-13 06:19:51.167721 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})  2026-02-13 06:19:51.167733 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})  2026-02-13 06:19:51.167742 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.167752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 06:19:51.167775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 06:19:51.167784 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.167792 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 06:19:51.167800 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 06:19:51.167808 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.167816 | orchestrator | 2026-02-13 06:19:51.167824 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-13 06:19:51.167833 | orchestrator | Friday 13 February 2026 06:19:41 +0000 (0:00:01.360) 0:03:42.435 ******* 2026-02-13 06:19:51.167842 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:51.167853 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:51.167861 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.167869 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:51.167877 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:51.167894 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.167902 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:51.167911 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:51.167919 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.167927 | orchestrator | 2026-02-13 06:19:51.167936 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-13 06:19:51.167944 | orchestrator | Friday 13 February 2026 06:19:43 +0000 (0:00:01.747) 0:03:44.183 ******* 2026-02-13 06:19:51.167952 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.167960 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.167968 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.167976 | orchestrator | 2026-02-13 06:19:51.167984 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-13 06:19:51.167992 | orchestrator | Friday 13 February 2026 06:19:44 +0000 (0:00:01.372) 0:03:45.555 ******* 2026-02-13 06:19:51.168000 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.168008 | orchestrator | skipping: 
[testbed-node-4] 2026-02-13 06:19:51.168020 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.168029 | orchestrator | 2026-02-13 06:19:51.168037 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-13 06:19:51.168045 | orchestrator | Friday 13 February 2026 06:19:46 +0000 (0:00:01.416) 0:03:46.972 ******* 2026-02-13 06:19:51.168053 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.168061 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.168069 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.168077 | orchestrator | 2026-02-13 06:19:51.168085 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-13 06:19:51.168093 | orchestrator | Friday 13 February 2026 06:19:47 +0000 (0:00:01.323) 0:03:48.295 ******* 2026-02-13 06:19:51.168101 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:51.168109 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:51.168117 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:19:51.168125 | orchestrator | 2026-02-13 06:19:51.168133 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-13 06:19:51.168141 | orchestrator | Friday 13 February 2026 06:19:48 +0000 (0:00:01.293) 0:03:49.589 ******* 2026-02-13 06:19:51.168154 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}) 2026-02-13 06:19:52.529424 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}) 2026-02-13 06:19:52.529496 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}) 2026-02-13 06:19:52.529506 | orchestrator | ok: 
[testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}) 2026-02-13 06:19:52.529514 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}) 2026-02-13 06:19:52.529522 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}) 2026-02-13 06:19:52.529552 | orchestrator | 2026-02-13 06:19:52.529563 | orchestrator | TASK [ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-13 06:19:52.529571 | orchestrator | Friday 13 February 2026 06:19:51 +0000 (0:00:02.255) 0:03:51.845 ******* 2026-02-13 06:19:52.529584 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f/osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1770953972.7269742, 'mtime': 1770953972.7209742, 'ctime': 1770953972.7209742, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f/osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 
'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:52.529609 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-7c5ad083-16ef-5861-9238-a28b124c66ab/osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1770953991.2062955, 'mtime': 1770953991.2022955, 'ctime': 1770953991.2022955, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-7c5ad083-16ef-5861-9238-a28b124c66ab/osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:52.529616 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:19:52.529636 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6/osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 
956, 'dev': 6, 'nlink': 1, 'atime': 1770953968.3952706, 'mtime': 1770953968.3882704, 'ctime': 1770953968.3882704, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6/osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:52.529648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f/osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 1770953987.1075842, 'mtime': 1770953987.1015842, 'ctime': 1770953987.1015842, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f/osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:52.529653 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:19:52.529661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-8151fb69-3858-5887-af01-e0d44d84b3e6/osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 954, 'dev': 6, 'nlink': 1, 'atime': 1770953970.9330325, 'mtime': 1770953970.9280324, 'ctime': 1770953970.9280324, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-8151fb69-3858-5887-af01-e0d44d84b3e6/osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}, 'ansible_loop_var': 'item'})  2026-02-13 06:19:52.529675 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1/osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 964, 'dev': 6, 'nlink': 1, 'atime': 1770953989.4863594, 'mtime': 1770953989.4793594, 'ctime': 1770953989.4793594, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1/osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.272463 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:03.272565 | orchestrator | 2026-02-13 06:20:03.272579 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] *********************** 2026-02-13 06:20:03.272590 | orchestrator | Friday 13 February 2026 06:19:52 +0000 (0:00:01.377) 0:03:53.222 ******* 2026-02-13 06:20:03.272600 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})  2026-02-13 06:20:03.272611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})  2026-02-13 06:20:03.272620 | 
orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:03.272632 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 06:20:03.272648 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 06:20:03.272662 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:03.272676 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 06:20:03.272751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 06:20:03.272766 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:03.272781 | orchestrator | 2026-02-13 06:20:03.272796 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] *** 2026-02-13 06:20:03.272812 | orchestrator | Friday 13 February 2026 06:19:53 +0000 (0:00:01.357) 0:03:54.580 ******* 2026-02-13 06:20:03.272831 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.272849 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}, 'ansible_loop_var': 'item'})  
2026-02-13 06:20:03.272864 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:03.272894 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.272904 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.272913 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:03.272922 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.272949 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.272958 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:03.272967 | orchestrator | 2026-02-13 06:20:03.272976 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] ********************** 2026-02-13 06:20:03.272984 | orchestrator | Friday 13 February 2026 06:19:55 +0000 (0:00:01.577) 0:03:56.157 ******* 2026-02-13 06:20:03.272993 | orchestrator | skipping: 
[testbed-node-3] => (item={'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'})  2026-02-13 06:20:03.273002 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'})  2026-02-13 06:20:03.273011 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:03.273036 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'})  2026-02-13 06:20:03.273047 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'})  2026-02-13 06:20:03.273057 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:03.273067 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'})  2026-02-13 06:20:03.273077 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'})  2026-02-13 06:20:03.273088 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:03.273097 | orchestrator | 2026-02-13 06:20:03.273108 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] *** 2026-02-13 06:20:03.273118 | orchestrator | Friday 13 February 2026 06:19:57 +0000 (0:00:01.685) 0:03:57.843 ******* 2026-02-13 06:20:03.273129 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-90d7f9ba-9289-5e80-9038-1ad4979f4e3f', 'data_vg': 'ceph-90d7f9ba-9289-5e80-9038-1ad4979f4e3f'}, 
'ansible_loop_var': 'item'})  2026-02-13 06:20:03.273140 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-7c5ad083-16ef-5861-9238-a28b124c66ab', 'data_vg': 'ceph-7c5ad083-16ef-5861-9238-a28b124c66ab'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.273150 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:03.273160 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-43dba57c-3e97-52bb-978e-0b7bf56fe0c6', 'data_vg': 'ceph-43dba57c-3e97-52bb-978e-0b7bf56fe0c6'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.273171 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-5ce47f09-4cf3-58ef-8e90-2b997425535f', 'data_vg': 'ceph-5ce47f09-4cf3-58ef-8e90-2b997425535f'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.273181 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:03.273196 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-8151fb69-3858-5887-af01-e0d44d84b3e6', 'data_vg': 'ceph-8151fb69-3858-5887-af01-e0d44d84b3e6'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.273213 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-5f44536a-6e14-5adc-b1bb-0c010a1280f1', 'data_vg': 'ceph-5f44536a-6e14-5adc-b1bb-0c010a1280f1'}, 'ansible_loop_var': 'item'})  2026-02-13 06:20:03.273223 | 
orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:03.273233 | orchestrator | 2026-02-13 06:20:03.273244 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] ******************************* 2026-02-13 06:20:03.273254 | orchestrator | Friday 13 February 2026 06:19:58 +0000 (0:00:01.411) 0:03:59.255 ******* 2026-02-13 06:20:03.273264 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:20:03.273274 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:20:03.273284 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:20:03.273295 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:03.273308 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:03.273323 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:03.273338 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:20:03.273353 | orchestrator | 2026-02-13 06:20:03.273368 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] ***************************** 2026-02-13 06:20:03.273383 | orchestrator | Friday 13 February 2026 06:20:00 +0000 (0:00:02.039) 0:04:01.295 ******* 2026-02-13 06:20:03.273397 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:20:03.273411 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:20:03.273425 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:20:03.273438 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:20:03.273453 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-13 06:20:03.273468 | orchestrator | 2026-02-13 06:20:03.273483 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] ************** 2026-02-13 06:20:03.273498 | orchestrator | Friday 13 February 2026 06:20:03 +0000 (0:00:02.542) 0:04:03.838 ******* 2026-02-13 06:20:03.273524 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-13 06:20:14.333396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333486 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333496 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:14.333502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333522 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333543 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:14.333561 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 
'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333586 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333591 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:14.333596 | orchestrator | 2026-02-13 06:20:14.333601 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ******************** 2026-02-13 06:20:14.333607 | orchestrator | Friday 13 February 2026 06:20:04 +0000 (0:00:01.524) 0:04:05.362 ******* 2026-02-13 06:20:14.333612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333626 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333631 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333635 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333644 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:14.333649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 
'type': 'replicated'}})  2026-02-13 06:20:14.333653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333663 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333672 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:14.333676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333711 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333732 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333737 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333742 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:14.333746 | orchestrator | 2026-02-13 06:20:14.333751 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ******************** 2026-02-13 06:20:14.333756 | orchestrator | Friday 13 February 2026 06:20:06 +0000 (0:00:01.909) 0:04:07.271 ******* 2026-02-13 
06:20:14.333765 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333779 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333788 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:14.333793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333798 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333807 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333816 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:14.333821 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-13 06:20:14.333825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333830 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-13 06:20:14.333847 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:14.333852 | orchestrator | 2026-02-13 06:20:14.333856 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] *********************************** 2026-02-13 06:20:14.333861 | orchestrator | Friday 13 February 2026 06:20:08 +0000 (0:00:01.439) 0:04:08.711 ******* 2026-02-13 06:20:14.333866 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:20:14.333870 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:20:14.333875 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:20:14.333879 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:14.333884 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:14.333888 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:14.333893 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:20:14.333897 | orchestrator | 2026-02-13 06:20:14.333902 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] ***************************** 2026-02-13 06:20:14.333906 | orchestrator | Friday 13 February 2026 06:20:09 +0000 (0:00:01.956) 0:04:10.667 ******* 2026-02-13 06:20:14.333911 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:20:14.333916 | orchestrator | skipping: [testbed-node-1] 2026-02-13 
06:20:14.333920 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:20:14.333925 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:14.333929 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:14.333937 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:14.333942 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:20:14.333946 | orchestrator | 2026-02-13 06:20:14.333951 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ****************** 2026-02-13 06:20:14.333956 | orchestrator | Friday 13 February 2026 06:20:12 +0000 (0:00:02.121) 0:04:12.789 ******* 2026-02-13 06:20:14.333960 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:20:14.333965 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:20:14.333969 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:20:14.333975 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:14.333980 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:14.333985 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:14.333990 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:20:14.333995 | orchestrator | 2026-02-13 06:20:14.334000 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] 
*** 2026-02-13 06:20:14.334005 | orchestrator | Friday 13 February 2026 06:20:14 +0000 (0:00:02.005) 0:04:14.794 ******* 2026-02-13 06:20:14.334046 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:20:24.272037 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:20:24.272144 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:20:24.272159 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:24.272170 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:24.272182 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:24.272193 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:20:24.272204 | orchestrator | 2026-02-13 06:20:24.272217 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] *** 2026-02-13 06:20:24.272230 | orchestrator | Friday 13 February 2026 06:20:16 +0000 (0:00:01.943) 0:04:16.738 ******* 2026-02-13 06:20:24.272241 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:20:24.272252 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:20:24.272263 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:20:24.272274 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:24.272284 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:24.272294 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:24.272305 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:20:24.272315 | orchestrator | 2026-02-13 06:20:24.272325 | orchestrator | TASK [ceph-validate : Validate container registry credentials] ***************** 2026-02-13 06:20:24.272337 | orchestrator | Friday 13 February 2026 06:20:18 +0000 (0:00:02.042) 0:04:18.780 ******* 2026-02-13 06:20:24.272348 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:20:24.272358 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:20:24.272368 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:20:24.272378 | orchestrator | skipping: [testbed-node-3] 
2026-02-13 06:20:24.272389 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:24.272399 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:24.272410 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:20:24.272420 | orchestrator | 2026-02-13 06:20:24.272431 | orchestrator | TASK [ceph-validate : Validate container service and container package] ******** 2026-02-13 06:20:24.272441 | orchestrator | Friday 13 February 2026 06:20:19 +0000 (0:00:01.887) 0:04:20.668 ******* 2026-02-13 06:20:24.272452 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:20:24.272462 | orchestrator | skipping: [testbed-node-1] 2026-02-13 06:20:24.272473 | orchestrator | skipping: [testbed-node-2] 2026-02-13 06:20:24.272484 | orchestrator | skipping: [testbed-node-3] 2026-02-13 06:20:24.272495 | orchestrator | skipping: [testbed-node-4] 2026-02-13 06:20:24.272505 | orchestrator | skipping: [testbed-node-5] 2026-02-13 06:20:24.272515 | orchestrator | skipping: [testbed-manager] 2026-02-13 06:20:24.272526 | orchestrator | 2026-02-13 06:20:24.272537 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] ********************** 2026-02-13 06:20:24.272547 | orchestrator | Friday 13 February 2026 06:20:22 +0000 (0:00:02.128) 0:04:22.797 ******* 2026-02-13 06:20:24.272559 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-13 06:20:24.272594 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-13 06:20:24.272607 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-13 06:20:24.272634 | orchestrator | skipping: 
[testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:24.272646 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:24.272659 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:24.272670 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:20:24.272704 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:24.272715 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:24.272726 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:24.272737 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:24.272748 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:24.272757 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:24.272763 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:20:24.272786 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:24.272797 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:24.272808 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:24.272818 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:24.272829 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:24.272840 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:24.272851 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:20:24.272862 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:24.272875 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:24.272882 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:24.272891 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:24.272902 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:24.272912 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:24.272929 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:24.272940 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:24.272951 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:24.272962 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:24.272971 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:24.272978 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:24.272984 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:24.272993 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:20:24.273003 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:24.273014 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:24.273025 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:20:24.273036 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:24.273053 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:28.953050 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:28.953159 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:28.953175 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:28.953209 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:28.953221 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:28.953232 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:20:28.953244 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:28.953254 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:28.953264 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:20:28.953274 | orchestrator |
2026-02-13 06:20:28.953285 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-13 06:20:28.953296 | orchestrator | Friday 13 February 2026 06:20:24 +0000 (0:00:02.164) 0:04:24.962 *******
2026-02-13 06:20:28.953306 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:20:28.953316 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:20:28.953325 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:20:28.953335 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:20:28.953344 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:20:28.953354 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:20:28.953363 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:20:28.953373 | orchestrator |
2026-02-13 06:20:28.953383 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-13 06:20:28.953392 | orchestrator | Friday 13 February 2026 06:20:26 +0000 (0:00:02.465) 0:04:27.428 *******
2026-02-13 06:20:28.953402 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:28.953430 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:28.953447 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:28.953460 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:28.953473 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:28.953486 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:28.953498 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:20:28.953511 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:28.953522 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:28.953535 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:28.953558 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:28.953592 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:28.953607 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:28.953621 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:20:28.953631 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:28.953640 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:28.953650 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:28.953659 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:28.953668 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:28.953703 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:28.953715 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:20:28.953724 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:28.953733 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:28.953742 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:28.953751 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:28.953767 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:20:28.953777 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:20:28.953786 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:28.953795 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:28.953805 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:28.953814 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:20:28.953829 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:20:28.953839 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:28.953848 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:20:28.953857 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:20:28.953866 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-13 06:20:28.953883 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-13 06:21:09.714395 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-13 06:21:09.714552 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:21:09.714579 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:21:09.714601 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:21:09.714620 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:21:09.714639 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-13 06:21:09.714658 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:21:09.714670 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-13 06:21:09.714715 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:21:09.714726 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-13 06:21:09.714736 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:21:09.714746 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:21:09.714756 | orchestrator |
2026-02-13 06:21:09.714768 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ******************************
2026-02-13 06:21:09.714779 | orchestrator | Friday 13 February 2026 06:20:28 +0000 (0:00:02.208) 0:04:29.636 *******
2026-02-13 06:21:09.714788 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:09.714798 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:21:09.714808 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:21:09.714833 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:21:09.714844 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:21:09.714853 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:21:09.714884 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:21:09.714895 | orchestrator |
2026-02-13 06:21:09.714906 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] ****************************
2026-02-13 06:21:09.714917 | orchestrator | Friday 13 February 2026 06:20:31 +0000 (0:00:02.187) 0:04:31.824 *******
2026-02-13 06:21:09.714929 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:09.714939 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:21:09.714950 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:21:09.714961 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:21:09.714972 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:21:09.714983 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:21:09.714995 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:21:09.715005 | orchestrator |
2026-02-13 06:21:09.715017 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] *****************************
2026-02-13 06:21:09.715029 | orchestrator | Friday 13 February 2026 06:20:33 +0000 (0:00:02.203) 0:04:34.027 *******
2026-02-13 06:21:09.715041 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:09.715052 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:21:09.715063 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:21:09.715074 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:21:09.715085 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:21:09.715096 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:21:09.715108 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:21:09.715118 | orchestrator |
2026-02-13 06:21:09.715130 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ********
2026-02-13 06:21:09.715141 | orchestrator | Friday 13 February 2026 06:20:35 +0000 (0:00:02.340) 0:04:36.368 *******
2026-02-13 06:21:09.715152 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-13 06:21:09.715165 | orchestrator |
2026-02-13 06:21:09.715177 | orchestrator | TASK [ceph-container-engine : Include specific variables] **********************
2026-02-13 06:21:09.715188 | orchestrator | Friday 13 February 2026 06:20:38 +0000 (0:00:02.678) 0:04:39.046 *******
2026-02-13 06:21:09.715200 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-13 06:21:09.715212 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-13 06:21:09.715223 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-13 06:21:09.715235 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-13 06:21:09.715265 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-13 06:21:09.715277 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-13 06:21:09.715289 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml)
2026-02-13 06:21:09.715299 | orchestrator |
2026-02-13 06:21:09.715309 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] ****
2026-02-13 06:21:09.715318 | orchestrator | Friday 13 February 2026 06:20:40 +0000 (0:00:02.151) 0:04:41.198 *******
2026-02-13 06:21:09.715328 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:09.715337 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:21:09.715347 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:21:09.715356 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:21:09.715366 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:21:09.715375 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:21:09.715385 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:21:09.715394 | orchestrator |
2026-02-13 06:21:09.715404 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] *********
2026-02-13 06:21:09.715413 | orchestrator | Friday 13 February 2026 06:20:42 +0000 (0:00:02.178) 0:04:43.377 *******
2026-02-13 06:21:09.715430 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:09.715440 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:21:09.715449 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:21:09.715459 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:21:09.715468 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:21:09.715478 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:21:09.715487 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:21:09.715497 | orchestrator |
2026-02-13 06:21:09.715507 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] ***************
2026-02-13 06:21:09.715516 | orchestrator | Friday 13 February 2026 06:20:44 +0000 (0:00:02.234) 0:04:45.611 *******
2026-02-13 06:21:09.715526 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:09.715536 | orchestrator | ok: [testbed-node-1]
2026-02-13 06:21:09.715545 | orchestrator | ok: [testbed-node-2]
2026-02-13 06:21:09.715554 | orchestrator | ok: [testbed-node-3]
2026-02-13 06:21:09.715564 | orchestrator | ok: [testbed-node-4]
2026-02-13 06:21:09.715573 | orchestrator | ok: [testbed-node-5]
2026-02-13 06:21:09.715583 | orchestrator | ok: [testbed-manager]
2026-02-13 06:21:09.715592 | orchestrator |
2026-02-13 06:21:09.715602 | orchestrator | TASK [ceph-container-engine : Restart docker] **********************************
2026-02-13 06:21:09.715611 | orchestrator | Friday 13 February 2026 06:20:47 +0000 (0:00:02.288) 0:04:47.899 *******
2026-02-13 06:21:09.715621 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:09.715630 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:21:09.715640 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:21:09.715649 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:21:09.715659 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:21:09.715668 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:21:09.715699 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:21:09.715710 | orchestrator |
2026-02-13 06:21:09.715720 | orchestrator | TASK [ceph-container-common : Container registry authentication] ***************
2026-02-13 06:21:09.715729 | orchestrator | Friday 13 February 2026 06:20:49 +0000 (0:00:02.297) 0:04:50.197 *******
2026-02-13 06:21:09.715739 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:09.715754 | orchestrator | skipping: [testbed-node-1]
2026-02-13 06:21:09.715764 | orchestrator | skipping: [testbed-node-2]
2026-02-13 06:21:09.715774 | orchestrator | skipping: [testbed-node-3]
2026-02-13 06:21:09.715783 | orchestrator | skipping: [testbed-node-4]
2026-02-13 06:21:09.715793 | orchestrator | skipping: [testbed-node-5]
2026-02-13 06:21:09.715802 | orchestrator | skipping: [testbed-manager]
2026-02-13 06:21:09.715812 | orchestrator |
2026-02-13 06:21:09.715822 | orchestrator | TASK [Get the ceph release being deployed] *************************************
2026-02-13 06:21:09.715832 | orchestrator | Friday 13 February 2026 06:20:52 +0000 (0:00:02.533) 0:04:52.730 *******
2026-02-13 06:21:09.715842 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:09.715851 | orchestrator |
2026-02-13 06:21:09.715861 | orchestrator | TASK [Check ceph release being deployed] ***************************************
2026-02-13 06:21:09.715871 | orchestrator | Friday 13 February 2026 06:20:54 +0000 (0:00:02.755) 0:04:55.487 *******
2026-02-13 06:21:09.715881 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:09.715890 | orchestrator |
2026-02-13 06:21:09.715900 | orchestrator | PLAY [Ensure cluster config is applied] ****************************************
2026-02-13 06:21:09.715910 | orchestrator |
2026-02-13 06:21:09.715920 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-13 06:21:09.715929 | orchestrator | Friday 13 February 2026 06:20:56 +0000 (0:00:01.887) 0:04:57.374 *******
2026-02-13 06:21:09.715939 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:09.715949 | orchestrator |
2026-02-13 06:21:09.715959 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-13 06:21:09.715968 | orchestrator | Friday 13 February 2026 06:20:58 +0000 (0:00:01.446) 0:04:58.821 *******
2026-02-13 06:21:09.715978 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:09.715989 | orchestrator |
2026-02-13 06:21:09.716005 | orchestrator | TASK [Set cluster configs] *****************************************************
2026-02-13 06:21:09.716030 | orchestrator | Friday 13 February 2026 06:20:59 +0000 (0:00:01.145) 0:04:59.967 *******
2026-02-13 06:21:09.716049 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-13 06:21:09.716076 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-13 06:21:36.415263 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-13 06:21:36.415382 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-13 06:21:36.415400 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-13 06:21:36.415415 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}])
2026-02-13 06:21:36.415430 | orchestrator |
2026-02-13 06:21:36.415443 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************
2026-02-13 06:21:36.415456 | orchestrator |
2026-02-13 06:21:36.415467 | orchestrator | TASK [Remove ceph aliases] *****************************************************
2026-02-13 06:21:36.415479 | orchestrator | Friday 13 February 2026 06:21:09 +0000 (0:00:10.427) 0:05:10.394 *******
2026-02-13 06:21:36.415491 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.415504 | orchestrator |
2026-02-13 06:21:36.415516 | orchestrator | TASK [Set mon_host_count] ******************************************************
2026-02-13 06:21:36.415527 | orchestrator | Friday 13 February 2026 06:21:11 +0000 (0:00:01.515) 0:05:11.909 *******
2026-02-13 06:21:36.415539 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.415551 | orchestrator |
2026-02-13 06:21:36.415579 | orchestrator | TASK [Fail when less than three monitors] **************************************
2026-02-13 06:21:36.415590 | orchestrator | Friday 13 February 2026 06:21:12 +0000 (0:00:01.178) 0:05:13.088 *******
2026-02-13 06:21:36.415602 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:36.415615 | orchestrator |
2026-02-13 06:21:36.415627 | orchestrator | TASK [Select a running monitor] ************************************************
2026-02-13 06:21:36.415638 | orchestrator | Friday 13 February 2026 06:21:13 +0000 (0:00:01.135) 0:05:14.223 *******
2026-02-13 06:21:36.415650 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.415661 | orchestrator |
2026-02-13 06:21:36.415741 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-13 06:21:36.415755 | orchestrator | Friday 13 February 2026 06:21:14 +0000 (0:00:01.120) 0:05:15.362 *******
2026-02-13 06:21:36.415765 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-13 06:21:36.415777 | orchestrator |
2026-02-13 06:21:36.415788 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-13 06:21:36.415802 | orchestrator | Friday 13 February 2026 06:21:15 +0000 (0:00:01.487) 0:05:16.483 *******
2026-02-13 06:21:36.415816 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.415830 | orchestrator |
2026-02-13 06:21:36.415843 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-13 06:21:36.415856 | orchestrator | Friday 13 February 2026 06:21:17 +0000 (0:00:01.205) 0:05:17.970 *******
2026-02-13 06:21:36.415869 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.415882 | orchestrator |
2026-02-13 06:21:36.415895 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-13 06:21:36.415909 | orchestrator | Friday 13 February 2026 06:21:18 +0000 (0:00:01.473) 0:05:19.176 *******
2026-02-13 06:21:36.415920 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.415932 | orchestrator |
2026-02-13 06:21:36.415943 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-13 06:21:36.415954 | orchestrator | Friday 13 February 2026 06:21:19 +0000 (0:00:01.170) 0:05:20.649 *******
2026-02-13 06:21:36.415966 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.415977 | orchestrator |
2026-02-13 06:21:36.415989 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-13 06:21:36.416000 | orchestrator | Friday 13 February 2026 06:21:21 +0000 (0:00:01.146) 0:05:21.820 *******
2026-02-13 06:21:36.416012 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.416023 | orchestrator |
2026-02-13 06:21:36.416034 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-13 06:21:36.416046 | orchestrator | Friday 13 February 2026 06:21:22 +0000 (0:00:01.183) 0:05:22.966 *******
2026-02-13 06:21:36.416057 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.416069 | orchestrator |
2026-02-13 06:21:36.416080 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-13 06:21:36.416092 | orchestrator | Friday 13 February 2026 06:21:23 +0000 (0:00:01.138) 0:05:24.150 *******
2026-02-13 06:21:36.416104 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:36.416115 | orchestrator |
2026-02-13 06:21:36.416147 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-13 06:21:36.416159 | orchestrator | Friday 13 February 2026 06:21:24 +0000 (0:00:01.207) 0:05:25.289 *******
2026-02-13 06:21:36.416171 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.416182 | orchestrator |
2026-02-13 06:21:36.416194 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-13 06:21:36.416206 | orchestrator | Friday 13 February 2026 06:21:25 +0000 (0:00:01.630) 0:05:26.496 *******
2026-02-13 06:21:36.416218 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:21:36.416230 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 06:21:36.416241 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 06:21:36.416253 | orchestrator |
2026-02-13 06:21:36.416264 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-13 06:21:36.416276 | orchestrator | Friday 13 February 2026 06:21:27 +0000 (0:00:01.261) 0:05:28.127 *******
2026-02-13 06:21:36.416287 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:21:36.416299 | orchestrator |
2026-02-13 06:21:36.416311 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-13 06:21:36.416322 | orchestrator | Friday 13 February 2026 06:21:28 +0000 (0:00:03.235) 0:05:29.388 *******
2026-02-13 06:21:36.416334 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:21:36.416346 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 06:21:36.416366 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 06:21:36.416377 | orchestrator |
2026-02-13 06:21:36.416389 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-13 06:21:36.416401 | orchestrator | Friday 13 February 2026 06:21:31 +0000 (0:00:01.414) 0:05:32.623 *******
2026-02-13 06:21:36.416412 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:21:36.416424 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 06:21:36.416435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 06:21:36.416447 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:36.416458 | orchestrator |
2026-02-13 06:21:36.416470 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-13 06:21:36.416481 | orchestrator | Friday 13 February 2026 06:21:33 +0000 (0:00:01.917) 0:05:34.037 *******
2026-02-13 06:21:36.416494 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-13 06:21:36.416514 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-13 06:21:36.416526 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-13 06:21:36.416538 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:36.416550 | orchestrator |
2026-02-13 06:21:36.416562 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-13 06:21:36.416573 | orchestrator | Friday 13 February 2026 06:21:35 +0000 (0:00:01.917) 0:05:35.955 *******
2026-02-13 06:21:36.416586 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-13 06:21:36.416600 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-13 06:21:36.416623 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment |
bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-13 06:21:36.416637 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:21:36.416657 | orchestrator | 2026-02-13 06:21:36.416698 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-13 06:21:36.416729 | orchestrator | Friday 13 February 2026 06:21:36 +0000 (0:00:01.143) 0:05:37.099 ******* 2026-02-13 06:21:55.930898 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '7bdd5a857154', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-13 06:21:29.251015', 'end': '2026-02-13 06:21:29.301734', 'delta': '0:00:00.050719', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['7bdd5a857154'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-13 06:21:55.930992 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'b8f8955ec790', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-13 06:21:29.807371', 'end': '2026-02-13 06:21:29.856623', 'delta': '0:00:00.049252', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b8f8955ec790'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-13 06:21:55.931010 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '30f78d02966b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-13 06:21:30.678842', 'end': '2026-02-13 06:21:30.734187', 'delta': '0:00:00.055345', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['30f78d02966b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-13 06:21:55.931015 | orchestrator | 2026-02-13 06:21:55.931020 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-13 06:21:55.931025 | orchestrator | Friday 13 February 2026 06:21:37 +0000 (0:00:01.223) 0:05:38.322 ******* 2026-02-13 06:21:55.931029 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:21:55.931034 | orchestrator | 2026-02-13 06:21:55.931038 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-13 06:21:55.931042 | orchestrator | Friday 13 February 2026 06:21:38 +0000 (0:00:01.268) 0:05:39.590 ******* 2026-02-13 06:21:55.931045 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:21:55.931050 | orchestrator | 2026-02-13 06:21:55.931054 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-13 06:21:55.931058 | orchestrator | Friday 13 February 2026 06:21:40 +0000 (0:00:01.254) 0:05:40.845 ******* 2026-02-13 06:21:55.931062 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:21:55.931066 | orchestrator | 2026-02-13 06:21:55.931070 | orchestrator | TASK 
[ceph-facts : Get current fsid] ******************************************* 2026-02-13 06:21:55.931073 | orchestrator | Friday 13 February 2026 06:21:41 +0000 (0:00:01.136) 0:05:41.981 ******* 2026-02-13 06:21:55.931077 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] 2026-02-13 06:21:55.931081 | orchestrator | 2026-02-13 06:21:55.931085 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-13 06:21:55.931089 | orchestrator | Friday 13 February 2026 06:21:43 +0000 (0:00:02.005) 0:05:43.987 ******* 2026-02-13 06:21:55.931093 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:21:55.931096 | orchestrator | 2026-02-13 06:21:55.931100 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-13 06:21:55.931104 | orchestrator | Friday 13 February 2026 06:21:44 +0000 (0:00:01.115) 0:05:45.102 ******* 2026-02-13 06:21:55.931108 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:21:55.931112 | orchestrator | 2026-02-13 06:21:55.931119 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-13 06:21:55.931130 | orchestrator | Friday 13 February 2026 06:21:45 +0000 (0:00:01.107) 0:05:46.210 ******* 2026-02-13 06:21:55.931136 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:21:55.931142 | orchestrator | 2026-02-13 06:21:55.931148 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-13 06:21:55.931154 | orchestrator | Friday 13 February 2026 06:21:46 +0000 (0:00:01.217) 0:05:47.427 ******* 2026-02-13 06:21:55.931161 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:21:55.931167 | orchestrator | 2026-02-13 06:21:55.931173 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-13 06:21:55.931179 | orchestrator | Friday 13 February 2026 06:21:47 +0000 (0:00:01.110) 0:05:48.538 ******* 
2026-02-13 06:21:55.931186 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:55.931191 | orchestrator |
2026-02-13 06:21:55.931204 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-13 06:21:55.931209 | orchestrator | Friday 13 February 2026 06:21:48 +0000 (0:00:01.096) 0:05:49.634 *******
2026-02-13 06:21:55.931212 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:55.931216 | orchestrator |
2026-02-13 06:21:55.931220 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-13 06:21:55.931224 | orchestrator | Friday 13 February 2026 06:21:50 +0000 (0:00:01.135) 0:05:50.770 *******
2026-02-13 06:21:55.931228 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:55.931232 | orchestrator |
2026-02-13 06:21:55.931235 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-13 06:21:55.931239 | orchestrator | Friday 13 February 2026 06:21:51 +0000 (0:00:01.158) 0:05:51.929 *******
2026-02-13 06:21:55.931243 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:55.931247 | orchestrator |
2026-02-13 06:21:55.931250 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-13 06:21:55.931254 | orchestrator | Friday 13 February 2026 06:21:52 +0000 (0:00:01.121) 0:05:53.050 *******
2026-02-13 06:21:55.931258 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:55.931262 | orchestrator |
2026-02-13 06:21:55.931266 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-13 06:21:55.931270 | orchestrator | Friday 13 February 2026 06:21:53 +0000 (0:00:01.123) 0:05:54.174 *******
2026-02-13 06:21:55.931274 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:55.931278 | orchestrator |
2026-02-13 06:21:55.931281 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-13 06:21:55.931285 | orchestrator | Friday 13 February 2026 06:21:54 +0000 (0:00:01.158) 0:05:55.332 *******
2026-02-13 06:21:55.931290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:21:55.931297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:21:55.931304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:21:55.931309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-13 06:21:55.931318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:21:55.931322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:21:55.931330 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:21:57.160714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8816e0be', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-13 06:21:57.160838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:21:57.160855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-13 06:21:57.160868 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:21:57.160881 | orchestrator |
2026-02-13 06:21:57.160893 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-13 06:21:57.160906 | orchestrator | Friday 13 February 2026 06:21:55 +0000 (0:00:01.281) 0:05:56.614 *******
2026-02-13 06:21:57.160920 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:21:57.160951 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:21:57.160964 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:21:57.160977 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-13-02-25-49-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:21:57.161003 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:21:57.161023 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:21:57.161036 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:21:57.161058 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '8816e0be', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1', 'scsi-SQEMU_QEMU_HARDDISK_8816e0be-b769-4c64-9a1e-16e9d78e3106-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:22:48.052335 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:22:48.052465 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-13 06:22:48.052490 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:22:48.052510 | orchestrator |
2026-02-13 06:22:48.052528 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-13 06:22:48.052547 | orchestrator | Friday 13 February 2026 06:21:57 +0000 (0:00:01.239) 0:05:57.853 *******
2026-02-13 06:22:48.052564 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:22:48.052581 | orchestrator |
2026-02-13 06:22:48.052598 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-13 06:22:48.052616 | orchestrator | Friday 13 February 2026 06:21:58 +0000 (0:00:01.560) 0:05:59.414 *******
2026-02-13 06:22:48.052632 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:22:48.052649 | orchestrator |
2026-02-13 06:22:48.052663 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-13 06:22:48.052764 | orchestrator | Friday 13 February 2026 06:21:59 +0000 (0:00:01.173) 0:06:00.587 *******
2026-02-13 06:22:48.052783 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:22:48.052801 | orchestrator |
2026-02-13 06:22:48.052817 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-13 06:22:48.052834 | orchestrator | Friday 13 February 2026 06:22:01 +0000 (0:00:01.470) 0:06:02.058 *******
2026-02-13 06:22:48.052850 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:22:48.052865 | orchestrator |
2026-02-13 06:22:48.052883 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-13 06:22:48.052900 | orchestrator | Friday 13 February 2026 06:22:02 +0000 (0:00:01.151) 0:06:03.210 *******
2026-02-13 06:22:48.052917 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:22:48.052931 | orchestrator |
2026-02-13 06:22:48.052944 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-13 06:22:48.052959 | orchestrator | Friday 13 February 2026 06:22:03 +0000 (0:00:01.232) 0:06:04.442 *******
2026-02-13 06:22:48.052973 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:22:48.052987 | orchestrator |
2026-02-13 06:22:48.053001 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-13 06:22:48.053015 | orchestrator | Friday 13 February 2026 06:22:04 +0000 (0:00:01.184) 0:06:05.627 *******
2026-02-13 06:22:48.053030 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:22:48.053044 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 06:22:48.053058 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 06:22:48.053071 | orchestrator |
2026-02-13 06:22:48.053085 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-13 06:22:48.053099 | orchestrator | Friday 13 February 2026 06:22:06 +0000 (0:00:02.016) 0:06:07.643 *******
2026-02-13 06:22:48.053139 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:22:48.053155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-13 06:22:48.053169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-13 06:22:48.053182 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:22:48.053195 | orchestrator |
2026-02-13 06:22:48.053208 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-13 06:22:48.053222 | orchestrator | Friday 13 February 2026 06:22:08 +0000 (0:00:01.165) 0:06:08.809 *******
2026-02-13 06:22:48.053235 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:22:48.053249 | orchestrator |
2026-02-13 06:22:48.053261 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-13 06:22:48.053275 | orchestrator | Friday 13 February 2026 06:22:09 +0000 (0:00:01.130) 0:06:09.940 *******
2026-02-13 06:22:48.053290 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:22:48.053303 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 06:22:48.053317 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 06:22:48.053330 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-13 06:22:48.053343 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-13 06:22:48.053357 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-13 06:22:48.053389 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-13 06:22:48.053404 | orchestrator |
2026-02-13 06:22:48.053426 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-13 06:22:48.053440 | orchestrator | Friday 13 February 2026 06:22:11 +0000 (0:00:02.259) 0:06:12.199 *******
2026-02-13 06:22:48.053453 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:22:48.053467 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 06:22:48.053480 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 06:22:48.053493 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-13 06:22:48.053507 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-13 06:22:48.053520 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-13 06:22:48.053533 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-13 06:22:48.053546 | orchestrator |
2026-02-13 06:22:48.053560 | orchestrator | TASK [Get ceph cluster status] *************************************************
2026-02-13 06:22:48.053573 | orchestrator | Friday 13 February 2026 06:22:14 +0000 (0:00:02.994) 0:06:15.193 *******
2026-02-13 06:22:48.053587 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-02-13 06:22:48.053601 | orchestrator |
2026-02-13 06:22:48.053614 | orchestrator | TASK [Display ceph health detail] **********************************************
2026-02-13 06:22:48.053628 | orchestrator | Friday 13 February 2026 06:22:16 +0000 (0:00:02.278) 0:06:17.472 *******
2026-02-13 06:22:48.053641 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:22:48.053655 | orchestrator |
2026-02-13 06:22:48.053670 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] ****************************
2026-02-13 06:22:48.053706 | orchestrator | Friday 13 February 2026 06:22:18 +0000 (0:00:01.234) 0:06:18.706 *******
2026-02-13 06:22:48.053719 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:22:48.053733 | orchestrator |
2026-02-13 06:22:48.053746 | orchestrator | TASK [Get the ceph quorum status] **********************************************
2026-02-13 06:22:48.053760 | orchestrator | Friday 13 February 2026 06:22:19 +0000 (0:00:01.128) 0:06:19.834 *******
2026-02-13 06:22:48.053773 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)]
2026-02-13 06:22:48.053795 | orchestrator |
2026-02-13 06:22:48.053808 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] *****************
2026-02-13 06:22:48.053822 | orchestrator | Friday 13 February 2026 06:22:21 +0000 (0:00:02.243) 0:06:22.078 *******
2026-02-13 06:22:48.053835 | orchestrator | skipping: [testbed-node-0]
2026-02-13 06:22:48.053849 | orchestrator |
2026-02-13 06:22:48.053863 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ********************
2026-02-13 06:22:48.053876 | orchestrator | Friday 13 February 2026 06:22:22 +0000 (0:00:01.101) 0:06:23.179 *******
2026-02-13 06:22:48.053889 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:22:48.053902 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-13 06:22:48.053915 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-13 06:22:48.053929 | orchestrator |
2026-02-13 06:22:48.053942 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-13 06:22:48.053956 | orchestrator | Friday 13 February 2026 06:22:25 +0000 (0:00:02.583) 0:06:25.763 *******
2026-02-13 06:22:48.053969 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-13 06:22:48.053982 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-13 06:22:48.053997 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-13 06:22:48.054011 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-13 06:22:48.054084 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-13 06:22:48.054098 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-13 06:22:48.054112 | orchestrator |
2026-02-13 06:22:48.054125 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-13 06:22:48.054138 | orchestrator | Friday 13 February 2026 06:22:38 +0000 (0:00:13.472) 0:06:39.236 *******
2026-02-13 06:22:48.054151 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:22:48.054165 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-13 06:22:48.054179 | orchestrator |
2026-02-13 06:22:48.054193 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-13 06:22:48.054206 | orchestrator | Friday 13
February 2026 06:22:42 +0000 (0:00:04.046) 0:06:43.283 ******* 2026-02-13 06:22:48.054219 | orchestrator | changed: [testbed-node-0] 2026-02-13 06:22:48.054233 | orchestrator | 2026-02-13 06:22:48.054246 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-13 06:22:48.054260 | orchestrator | Friday 13 February 2026 06:22:45 +0000 (0:00:02.463) 0:06:45.746 ******* 2026-02-13 06:22:48.054273 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-13 06:22:48.054286 | orchestrator | 2026-02-13 06:22:48.054299 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-13 06:22:48.054311 | orchestrator | Friday 13 February 2026 06:22:46 +0000 (0:00:01.479) 0:06:47.226 ******* 2026-02-13 06:22:48.054324 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0 2026-02-13 06:22:48.054337 | orchestrator | 2026-02-13 06:22:48.054360 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-13 06:23:39.113248 | orchestrator | Friday 13 February 2026 06:22:48 +0000 (0:00:01.509) 0:06:48.736 ******* 2026-02-13 06:23:39.113394 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.113423 | orchestrator | 2026-02-13 06:23:39.113445 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-13 06:23:39.113460 | orchestrator | Friday 13 February 2026 06:22:49 +0000 (0:00:01.523) 0:06:50.260 ******* 2026-02-13 06:23:39.113472 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.113485 | orchestrator | 2026-02-13 06:23:39.113521 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-13 06:23:39.113533 | orchestrator | Friday 13 February 2026 06:22:50 +0000 (0:00:01.148) 0:06:51.408 ******* 2026-02-13 06:23:39.113543 | 
orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.113554 | orchestrator | 2026-02-13 06:23:39.113565 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-13 06:23:39.113576 | orchestrator | Friday 13 February 2026 06:22:51 +0000 (0:00:01.140) 0:06:52.548 ******* 2026-02-13 06:23:39.113588 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.113599 | orchestrator | 2026-02-13 06:23:39.113610 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-13 06:23:39.113620 | orchestrator | Friday 13 February 2026 06:22:53 +0000 (0:00:01.154) 0:06:53.703 ******* 2026-02-13 06:23:39.113631 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.113642 | orchestrator | 2026-02-13 06:23:39.113653 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-13 06:23:39.113664 | orchestrator | Friday 13 February 2026 06:22:54 +0000 (0:00:01.606) 0:06:55.310 ******* 2026-02-13 06:23:39.113713 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.113725 | orchestrator | 2026-02-13 06:23:39.113735 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-13 06:23:39.113746 | orchestrator | Friday 13 February 2026 06:22:55 +0000 (0:00:01.159) 0:06:56.469 ******* 2026-02-13 06:23:39.113757 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.113768 | orchestrator | 2026-02-13 06:23:39.113778 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-13 06:23:39.113789 | orchestrator | Friday 13 February 2026 06:22:56 +0000 (0:00:01.154) 0:06:57.624 ******* 2026-02-13 06:23:39.113800 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.113810 | orchestrator | 2026-02-13 06:23:39.113821 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-13 
06:23:39.113832 | orchestrator | Friday 13 February 2026 06:22:58 +0000 (0:00:01.557) 0:06:59.182 ******* 2026-02-13 06:23:39.113852 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.113870 | orchestrator | 2026-02-13 06:23:39.113889 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-13 06:23:39.113908 | orchestrator | Friday 13 February 2026 06:23:00 +0000 (0:00:01.588) 0:07:00.771 ******* 2026-02-13 06:23:39.113926 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.113945 | orchestrator | 2026-02-13 06:23:39.113963 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-13 06:23:39.113981 | orchestrator | Friday 13 February 2026 06:23:01 +0000 (0:00:01.149) 0:07:01.920 ******* 2026-02-13 06:23:39.113998 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.114082 | orchestrator | 2026-02-13 06:23:39.114104 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-13 06:23:39.114125 | orchestrator | Friday 13 February 2026 06:23:02 +0000 (0:00:01.132) 0:07:03.052 ******* 2026-02-13 06:23:39.114144 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114163 | orchestrator | 2026-02-13 06:23:39.114175 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-13 06:23:39.114186 | orchestrator | Friday 13 February 2026 06:23:03 +0000 (0:00:01.126) 0:07:04.179 ******* 2026-02-13 06:23:39.114196 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114207 | orchestrator | 2026-02-13 06:23:39.114218 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-13 06:23:39.114229 | orchestrator | Friday 13 February 2026 06:23:04 +0000 (0:00:01.140) 0:07:05.319 ******* 2026-02-13 06:23:39.114239 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114250 | orchestrator | 
2026-02-13 06:23:39.114261 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-13 06:23:39.114272 | orchestrator | Friday 13 February 2026 06:23:05 +0000 (0:00:01.134) 0:07:06.454 ******* 2026-02-13 06:23:39.114282 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114293 | orchestrator | 2026-02-13 06:23:39.114316 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-13 06:23:39.114327 | orchestrator | Friday 13 February 2026 06:23:06 +0000 (0:00:01.131) 0:07:07.585 ******* 2026-02-13 06:23:39.114337 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114348 | orchestrator | 2026-02-13 06:23:39.114359 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-13 06:23:39.114369 | orchestrator | Friday 13 February 2026 06:23:07 +0000 (0:00:01.108) 0:07:08.694 ******* 2026-02-13 06:23:39.114380 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.114391 | orchestrator | 2026-02-13 06:23:39.114402 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-13 06:23:39.114412 | orchestrator | Friday 13 February 2026 06:23:09 +0000 (0:00:01.174) 0:07:09.868 ******* 2026-02-13 06:23:39.114423 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.114434 | orchestrator | 2026-02-13 06:23:39.114444 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-13 06:23:39.114455 | orchestrator | Friday 13 February 2026 06:23:10 +0000 (0:00:01.182) 0:07:11.051 ******* 2026-02-13 06:23:39.114466 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.114477 | orchestrator | 2026-02-13 06:23:39.114487 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-13 06:23:39.114498 | orchestrator | Friday 13 February 2026 06:23:11 +0000 (0:00:01.161) 
0:07:12.212 ******* 2026-02-13 06:23:39.114509 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114520 | orchestrator | 2026-02-13 06:23:39.114530 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-13 06:23:39.114541 | orchestrator | Friday 13 February 2026 06:23:12 +0000 (0:00:01.126) 0:07:13.339 ******* 2026-02-13 06:23:39.114552 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114563 | orchestrator | 2026-02-13 06:23:39.114603 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-13 06:23:39.114615 | orchestrator | Friday 13 February 2026 06:23:13 +0000 (0:00:01.158) 0:07:14.497 ******* 2026-02-13 06:23:39.114626 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114637 | orchestrator | 2026-02-13 06:23:39.114648 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-13 06:23:39.114658 | orchestrator | Friday 13 February 2026 06:23:14 +0000 (0:00:01.142) 0:07:15.640 ******* 2026-02-13 06:23:39.114697 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114710 | orchestrator | 2026-02-13 06:23:39.114721 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-13 06:23:39.114732 | orchestrator | Friday 13 February 2026 06:23:16 +0000 (0:00:01.133) 0:07:16.774 ******* 2026-02-13 06:23:39.114742 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114753 | orchestrator | 2026-02-13 06:23:39.114764 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-13 06:23:39.114774 | orchestrator | Friday 13 February 2026 06:23:17 +0000 (0:00:01.117) 0:07:17.892 ******* 2026-02-13 06:23:39.114785 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114796 | orchestrator | 2026-02-13 06:23:39.114807 | orchestrator | TASK [ceph-common : Set_fact 
ceph_version] ************************************* 2026-02-13 06:23:39.114818 | orchestrator | Friday 13 February 2026 06:23:18 +0000 (0:00:01.127) 0:07:19.020 ******* 2026-02-13 06:23:39.114829 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114840 | orchestrator | 2026-02-13 06:23:39.114850 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-13 06:23:39.114862 | orchestrator | Friday 13 February 2026 06:23:19 +0000 (0:00:01.110) 0:07:20.130 ******* 2026-02-13 06:23:39.114873 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114884 | orchestrator | 2026-02-13 06:23:39.114894 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-13 06:23:39.114905 | orchestrator | Friday 13 February 2026 06:23:20 +0000 (0:00:01.116) 0:07:21.246 ******* 2026-02-13 06:23:39.114915 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114934 | orchestrator | 2026-02-13 06:23:39.114945 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-13 06:23:39.114955 | orchestrator | Friday 13 February 2026 06:23:21 +0000 (0:00:01.122) 0:07:22.369 ******* 2026-02-13 06:23:39.114966 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.114977 | orchestrator | 2026-02-13 06:23:39.114988 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-13 06:23:39.114999 | orchestrator | Friday 13 February 2026 06:23:22 +0000 (0:00:01.113) 0:07:23.482 ******* 2026-02-13 06:23:39.115009 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.115020 | orchestrator | 2026-02-13 06:23:39.115031 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-13 06:23:39.115042 | orchestrator | Friday 13 February 2026 06:23:23 +0000 (0:00:01.107) 0:07:24.590 ******* 2026-02-13 06:23:39.115052 | 
orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.115063 | orchestrator | 2026-02-13 06:23:39.115074 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-13 06:23:39.115085 | orchestrator | Friday 13 February 2026 06:23:25 +0000 (0:00:01.124) 0:07:25.714 ******* 2026-02-13 06:23:39.115095 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.115106 | orchestrator | 2026-02-13 06:23:39.115117 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-13 06:23:39.115128 | orchestrator | Friday 13 February 2026 06:23:27 +0000 (0:00:02.016) 0:07:27.731 ******* 2026-02-13 06:23:39.115138 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.115149 | orchestrator | 2026-02-13 06:23:39.115160 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-13 06:23:39.115171 | orchestrator | Friday 13 February 2026 06:23:29 +0000 (0:00:02.448) 0:07:30.179 ******* 2026-02-13 06:23:39.115181 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0 2026-02-13 06:23:39.115193 | orchestrator | 2026-02-13 06:23:39.115204 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-13 06:23:39.115215 | orchestrator | Friday 13 February 2026 06:23:30 +0000 (0:00:01.480) 0:07:31.660 ******* 2026-02-13 06:23:39.115226 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.115237 | orchestrator | 2026-02-13 06:23:39.115248 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-13 06:23:39.115258 | orchestrator | Friday 13 February 2026 06:23:32 +0000 (0:00:01.206) 0:07:32.867 ******* 2026-02-13 06:23:39.115269 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.115280 | orchestrator | 2026-02-13 06:23:39.115291 | orchestrator | TASK [ceph-container-common : 
Remove ceph udev rules] ************************** 2026-02-13 06:23:39.115302 | orchestrator | Friday 13 February 2026 06:23:33 +0000 (0:00:01.141) 0:07:34.008 ******* 2026-02-13 06:23:39.115312 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-13 06:23:39.115323 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-13 06:23:39.115334 | orchestrator | 2026-02-13 06:23:39.115344 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-13 06:23:39.115355 | orchestrator | Friday 13 February 2026 06:23:35 +0000 (0:00:01.844) 0:07:35.853 ******* 2026-02-13 06:23:39.115366 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:23:39.115377 | orchestrator | 2026-02-13 06:23:39.115388 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-13 06:23:39.115398 | orchestrator | Friday 13 February 2026 06:23:36 +0000 (0:00:01.653) 0:07:37.506 ******* 2026-02-13 06:23:39.115409 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.115420 | orchestrator | 2026-02-13 06:23:39.115430 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-13 06:23:39.115441 | orchestrator | Friday 13 February 2026 06:23:37 +0000 (0:00:01.138) 0:07:38.645 ******* 2026-02-13 06:23:39.115452 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:23:39.115463 | orchestrator | 2026-02-13 06:23:39.115474 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-13 06:23:39.115503 | orchestrator | Friday 13 February 2026 06:23:39 +0000 (0:00:01.155) 0:07:39.800 ******* 2026-02-13 06:24:26.611455 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.611600 | orchestrator | 2026-02-13 06:24:26.611627 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] 
************************* 2026-02-13 06:24:26.611648 | orchestrator | Friday 13 February 2026 06:23:40 +0000 (0:00:01.139) 0:07:40.940 ******* 2026-02-13 06:24:26.611737 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0 2026-02-13 06:24:26.611760 | orchestrator | 2026-02-13 06:24:26.611779 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-13 06:24:26.611799 | orchestrator | Friday 13 February 2026 06:23:41 +0000 (0:00:01.492) 0:07:42.433 ******* 2026-02-13 06:24:26.611818 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:24:26.611838 | orchestrator | 2026-02-13 06:24:26.611857 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-13 06:24:26.611877 | orchestrator | Friday 13 February 2026 06:23:43 +0000 (0:00:01.756) 0:07:44.189 ******* 2026-02-13 06:24:26.611896 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-13 06:24:26.611916 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-13 06:24:26.611934 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-13 06:24:26.611953 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.611973 | orchestrator | 2026-02-13 06:24:26.611993 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-13 06:24:26.612013 | orchestrator | Friday 13 February 2026 06:23:44 +0000 (0:00:01.173) 0:07:45.363 ******* 2026-02-13 06:24:26.612034 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.612054 | orchestrator | 2026-02-13 06:24:26.612074 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-13 06:24:26.612094 | orchestrator | Friday 13 February 2026 06:23:45 +0000 (0:00:01.109) 0:07:46.472 ******* 2026-02-13 
06:24:26.612116 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.612136 | orchestrator | 2026-02-13 06:24:26.612157 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-13 06:24:26.612178 | orchestrator | Friday 13 February 2026 06:23:46 +0000 (0:00:01.153) 0:07:47.626 ******* 2026-02-13 06:24:26.612199 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.612220 | orchestrator | 2026-02-13 06:24:26.612241 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-13 06:24:26.612260 | orchestrator | Friday 13 February 2026 06:23:48 +0000 (0:00:01.131) 0:07:48.757 ******* 2026-02-13 06:24:26.612280 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.612298 | orchestrator | 2026-02-13 06:24:26.612316 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-13 06:24:26.612336 | orchestrator | Friday 13 February 2026 06:23:49 +0000 (0:00:01.152) 0:07:49.910 ******* 2026-02-13 06:24:26.612356 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.612375 | orchestrator | 2026-02-13 06:24:26.612394 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-13 06:24:26.612413 | orchestrator | Friday 13 February 2026 06:23:50 +0000 (0:00:01.174) 0:07:51.084 ******* 2026-02-13 06:24:26.612430 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:24:26.612449 | orchestrator | 2026-02-13 06:24:26.612467 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-13 06:24:26.612488 | orchestrator | Friday 13 February 2026 06:23:52 +0000 (0:00:02.530) 0:07:53.615 ******* 2026-02-13 06:24:26.612505 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:24:26.612524 | orchestrator | 2026-02-13 06:24:26.612543 | orchestrator | TASK [ceph-container-common : Include release.yml] 
***************************** 2026-02-13 06:24:26.612564 | orchestrator | Friday 13 February 2026 06:23:54 +0000 (0:00:01.177) 0:07:54.792 ******* 2026-02-13 06:24:26.612584 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0 2026-02-13 06:24:26.612635 | orchestrator | 2026-02-13 06:24:26.612656 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-13 06:24:26.612709 | orchestrator | Friday 13 February 2026 06:23:55 +0000 (0:00:01.443) 0:07:56.236 ******* 2026-02-13 06:24:26.612728 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.612747 | orchestrator | 2026-02-13 06:24:26.612765 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-13 06:24:26.612785 | orchestrator | Friday 13 February 2026 06:23:56 +0000 (0:00:01.141) 0:07:57.378 ******* 2026-02-13 06:24:26.612802 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.612821 | orchestrator | 2026-02-13 06:24:26.612839 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-13 06:24:26.612857 | orchestrator | Friday 13 February 2026 06:23:57 +0000 (0:00:01.113) 0:07:58.491 ******* 2026-02-13 06:24:26.612877 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.612896 | orchestrator | 2026-02-13 06:24:26.612915 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-13 06:24:26.612934 | orchestrator | Friday 13 February 2026 06:23:58 +0000 (0:00:01.122) 0:07:59.614 ******* 2026-02-13 06:24:26.612954 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.612972 | orchestrator | 2026-02-13 06:24:26.612990 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-13 06:24:26.613007 | orchestrator | Friday 13 February 2026 06:24:00 +0000 (0:00:01.126) 0:08:00.740 ******* 2026-02-13 
06:24:26.613025 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.613045 | orchestrator | 2026-02-13 06:24:26.613063 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-13 06:24:26.613082 | orchestrator | Friday 13 February 2026 06:24:01 +0000 (0:00:01.146) 0:08:01.887 ******* 2026-02-13 06:24:26.613101 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.613119 | orchestrator | 2026-02-13 06:24:26.613137 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-13 06:24:26.613156 | orchestrator | Friday 13 February 2026 06:24:02 +0000 (0:00:01.134) 0:08:03.021 ******* 2026-02-13 06:24:26.613174 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.613192 | orchestrator | 2026-02-13 06:24:26.613256 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-13 06:24:26.613275 | orchestrator | Friday 13 February 2026 06:24:03 +0000 (0:00:01.132) 0:08:04.153 ******* 2026-02-13 06:24:26.613291 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.613307 | orchestrator | 2026-02-13 06:24:26.613324 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-13 06:24:26.613340 | orchestrator | Friday 13 February 2026 06:24:04 +0000 (0:00:01.121) 0:08:05.274 ******* 2026-02-13 06:24:26.613356 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:24:26.613372 | orchestrator | 2026-02-13 06:24:26.613388 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-13 06:24:26.613404 | orchestrator | Friday 13 February 2026 06:24:05 +0000 (0:00:01.160) 0:08:06.435 ******* 2026-02-13 06:24:26.613421 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0 2026-02-13 06:24:26.613437 | orchestrator | 2026-02-13 06:24:26.613454 | orchestrator | TASK 
[ceph-config : Create ceph initial directories] *************************** 2026-02-13 06:24:26.613471 | orchestrator | Friday 13 February 2026 06:24:07 +0000 (0:00:01.451) 0:08:07.886 ******* 2026-02-13 06:24:26.613487 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph) 2026-02-13 06:24:26.613504 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-13 06:24:26.613521 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-13 06:24:26.613537 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-13 06:24:26.613554 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-13 06:24:26.613571 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-13 06:24:26.613586 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-13 06:24:26.613619 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-13 06:24:26.613636 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-13 06:24:26.613652 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-13 06:24:26.613694 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-13 06:24:26.613711 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-13 06:24:26.613728 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-13 06:24:26.613745 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-13 06:24:26.613761 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph) 2026-02-13 06:24:26.613777 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph) 2026-02-13 06:24:26.613793 | orchestrator | 2026-02-13 06:24:26.613811 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-13 06:24:26.613827 | orchestrator | Friday 13 February 2026 06:24:14 +0000 
(0:00:06.951) 0:08:14.838 ******* 2026-02-13 06:24:26.613845 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.613863 | orchestrator | 2026-02-13 06:24:26.613881 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-13 06:24:26.613899 | orchestrator | Friday 13 February 2026 06:24:15 +0000 (0:00:01.091) 0:08:15.930 ******* 2026-02-13 06:24:26.613917 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.613934 | orchestrator | 2026-02-13 06:24:26.613951 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-13 06:24:26.613967 | orchestrator | Friday 13 February 2026 06:24:16 +0000 (0:00:01.132) 0:08:17.063 ******* 2026-02-13 06:24:26.613984 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.614000 | orchestrator | 2026-02-13 06:24:26.614104 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-13 06:24:26.614125 | orchestrator | Friday 13 February 2026 06:24:17 +0000 (0:00:01.138) 0:08:18.201 ******* 2026-02-13 06:24:26.614142 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.614160 | orchestrator | 2026-02-13 06:24:26.614176 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-13 06:24:26.614193 | orchestrator | Friday 13 February 2026 06:24:18 +0000 (0:00:01.151) 0:08:19.353 ******* 2026-02-13 06:24:26.614210 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.614226 | orchestrator | 2026-02-13 06:24:26.614243 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-13 06:24:26.614259 | orchestrator | Friday 13 February 2026 06:24:19 +0000 (0:00:01.115) 0:08:20.469 ******* 2026-02-13 06:24:26.614275 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.614291 | orchestrator | 2026-02-13 06:24:26.614307 | orchestrator | TASK [ceph-config : 
Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-13 06:24:26.614323 | orchestrator | Friday 13 February 2026 06:24:20 +0000 (0:00:01.122) 0:08:21.592 ******* 2026-02-13 06:24:26.614338 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.614354 | orchestrator | 2026-02-13 06:24:26.614370 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-13 06:24:26.614387 | orchestrator | Friday 13 February 2026 06:24:22 +0000 (0:00:01.150) 0:08:22.742 ******* 2026-02-13 06:24:26.614403 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.614420 | orchestrator | 2026-02-13 06:24:26.614436 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-13 06:24:26.614453 | orchestrator | Friday 13 February 2026 06:24:23 +0000 (0:00:01.145) 0:08:23.887 ******* 2026-02-13 06:24:26.614469 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.614484 | orchestrator | 2026-02-13 06:24:26.614500 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-13 06:24:26.614517 | orchestrator | Friday 13 February 2026 06:24:24 +0000 (0:00:01.114) 0:08:25.002 ******* 2026-02-13 06:24:26.614547 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.614565 | orchestrator | 2026-02-13 06:24:26.614581 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-13 06:24:26.614597 | orchestrator | Friday 13 February 2026 06:24:25 +0000 (0:00:01.141) 0:08:26.144 ******* 2026-02-13 06:24:26.614621 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:24:26.614638 | orchestrator | 2026-02-13 06:24:26.614728 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-13 06:25:20.982182 | orchestrator | Friday 13 
February 2026 06:24:26 +0000 (0:00:01.154) 0:08:27.299 ******* 2026-02-13 06:25:20.982274 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982285 | orchestrator | 2026-02-13 06:25:20.982293 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-13 06:25:20.982300 | orchestrator | Friday 13 February 2026 06:24:27 +0000 (0:00:01.096) 0:08:28.395 ******* 2026-02-13 06:25:20.982307 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982313 | orchestrator | 2026-02-13 06:25:20.982319 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-13 06:25:20.982325 | orchestrator | Friday 13 February 2026 06:24:28 +0000 (0:00:01.212) 0:08:29.608 ******* 2026-02-13 06:25:20.982331 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982337 | orchestrator | 2026-02-13 06:25:20.982344 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-13 06:25:20.982350 | orchestrator | Friday 13 February 2026 06:24:30 +0000 (0:00:01.159) 0:08:30.768 ******* 2026-02-13 06:25:20.982357 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982367 | orchestrator | 2026-02-13 06:25:20.982376 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-13 06:25:20.982386 | orchestrator | Friday 13 February 2026 06:24:31 +0000 (0:00:01.265) 0:08:32.033 ******* 2026-02-13 06:25:20.982397 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982405 | orchestrator | 2026-02-13 06:25:20.982415 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-13 06:25:20.982424 | orchestrator | Friday 13 February 2026 06:24:32 +0000 (0:00:01.099) 0:08:33.132 ******* 2026-02-13 06:25:20.982434 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982443 | orchestrator | 2026-02-13 06:25:20.982452 | 
orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-13 06:25:20.982463 | orchestrator | Friday 13 February 2026 06:24:33 +0000 (0:00:01.107) 0:08:34.240 ******* 2026-02-13 06:25:20.982473 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982483 | orchestrator | 2026-02-13 06:25:20.982493 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-13 06:25:20.982503 | orchestrator | Friday 13 February 2026 06:24:34 +0000 (0:00:01.127) 0:08:35.368 ******* 2026-02-13 06:25:20.982513 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982523 | orchestrator | 2026-02-13 06:25:20.982533 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-13 06:25:20.982543 | orchestrator | Friday 13 February 2026 06:24:35 +0000 (0:00:01.134) 0:08:36.503 ******* 2026-02-13 06:25:20.982552 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982563 | orchestrator | 2026-02-13 06:25:20.982572 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-13 06:25:20.982581 | orchestrator | Friday 13 February 2026 06:24:36 +0000 (0:00:01.115) 0:08:37.619 ******* 2026-02-13 06:25:20.982589 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982598 | orchestrator | 2026-02-13 06:25:20.982607 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-13 06:25:20.982616 | orchestrator | Friday 13 February 2026 06:24:38 +0000 (0:00:01.119) 0:08:38.738 ******* 2026-02-13 06:25:20.982625 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-13 06:25:20.982634 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-13 06:25:20.982643 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-13 
06:25:20.982738 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982750 | orchestrator | 2026-02-13 06:25:20.982760 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-13 06:25:20.982769 | orchestrator | Friday 13 February 2026 06:24:39 +0000 (0:00:01.386) 0:08:40.124 ******* 2026-02-13 06:25:20.982779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-13 06:25:20.982789 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-13 06:25:20.982798 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-13 06:25:20.982807 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982816 | orchestrator | 2026-02-13 06:25:20.982825 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-13 06:25:20.982834 | orchestrator | Friday 13 February 2026 06:24:40 +0000 (0:00:01.400) 0:08:41.525 ******* 2026-02-13 06:25:20.982845 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-13 06:25:20.982855 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-13 06:25:20.982865 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-13 06:25:20.982875 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982885 | orchestrator | 2026-02-13 06:25:20.982894 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-13 06:25:20.982903 | orchestrator | Friday 13 February 2026 06:24:42 +0000 (0:00:01.396) 0:08:42.922 ******* 2026-02-13 06:25:20.982912 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982923 | orchestrator | 2026-02-13 06:25:20.982932 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-13 06:25:20.982942 | orchestrator | Friday 13 February 2026 06:24:43 +0000 (0:00:01.132) 0:08:44.054 ******* 
2026-02-13 06:25:20.982952 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-02-13 06:25:20.982962 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.982972 | orchestrator | 2026-02-13 06:25:20.982981 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-13 06:25:20.982992 | orchestrator | Friday 13 February 2026 06:24:44 +0000 (0:00:01.339) 0:08:45.394 ******* 2026-02-13 06:25:20.983002 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983012 | orchestrator | 2026-02-13 06:25:20.983024 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-13 06:25:20.983034 | orchestrator | Friday 13 February 2026 06:24:46 +0000 (0:00:01.963) 0:08:47.358 ******* 2026-02-13 06:25:20.983062 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983073 | orchestrator | 2026-02-13 06:25:20.983083 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-13 06:25:20.983114 | orchestrator | Friday 13 February 2026 06:24:47 +0000 (0:00:01.138) 0:08:48.497 ******* 2026-02-13 06:25:20.983125 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0 2026-02-13 06:25:20.983137 | orchestrator | 2026-02-13 06:25:20.983147 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-13 06:25:20.983158 | orchestrator | Friday 13 February 2026 06:24:49 +0000 (0:00:01.476) 0:08:49.974 ******* 2026-02-13 06:25:20.983168 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-13 06:25:20.983178 | orchestrator | 2026-02-13 06:25:20.983188 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-13 06:25:20.983194 | orchestrator | Friday 13 February 2026 06:24:52 +0000 (0:00:03.579) 0:08:53.554 ******* 2026-02-13 06:25:20.983200 | orchestrator | skipping: 
[testbed-node-0] 2026-02-13 06:25:20.983206 | orchestrator | 2026-02-13 06:25:20.983212 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-13 06:25:20.983217 | orchestrator | Friday 13 February 2026 06:24:54 +0000 (0:00:01.190) 0:08:54.744 ******* 2026-02-13 06:25:20.983223 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983229 | orchestrator | 2026-02-13 06:25:20.983235 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-13 06:25:20.983250 | orchestrator | Friday 13 February 2026 06:24:55 +0000 (0:00:01.150) 0:08:55.894 ******* 2026-02-13 06:25:20.983256 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983262 | orchestrator | 2026-02-13 06:25:20.983268 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-13 06:25:20.983274 | orchestrator | Friday 13 February 2026 06:24:56 +0000 (0:00:01.225) 0:08:57.120 ******* 2026-02-13 06:25:20.983280 | orchestrator | changed: [testbed-node-0] 2026-02-13 06:25:20.983285 | orchestrator | 2026-02-13 06:25:20.983291 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-13 06:25:20.983297 | orchestrator | Friday 13 February 2026 06:24:58 +0000 (0:00:02.039) 0:08:59.160 ******* 2026-02-13 06:25:20.983303 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983309 | orchestrator | 2026-02-13 06:25:20.983315 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-13 06:25:20.983321 | orchestrator | Friday 13 February 2026 06:25:00 +0000 (0:00:01.604) 0:09:00.764 ******* 2026-02-13 06:25:20.983326 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983332 | orchestrator | 2026-02-13 06:25:20.983338 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-13 06:25:20.983344 | orchestrator | Friday 13 
February 2026 06:25:01 +0000 (0:00:01.478) 0:09:02.242 ******* 2026-02-13 06:25:20.983350 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983355 | orchestrator | 2026-02-13 06:25:20.983361 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-13 06:25:20.983367 | orchestrator | Friday 13 February 2026 06:25:03 +0000 (0:00:01.465) 0:09:03.708 ******* 2026-02-13 06:25:20.983373 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983379 | orchestrator | 2026-02-13 06:25:20.983384 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-13 06:25:20.983391 | orchestrator | Friday 13 February 2026 06:25:04 +0000 (0:00:01.681) 0:09:05.389 ******* 2026-02-13 06:25:20.983396 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983402 | orchestrator | 2026-02-13 06:25:20.983408 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-13 06:25:20.983414 | orchestrator | Friday 13 February 2026 06:25:06 +0000 (0:00:01.708) 0:09:07.098 ******* 2026-02-13 06:25:20.983420 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-13 06:25:20.983426 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-13 06:25:20.983432 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-13 06:25:20.983438 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-13 06:25:20.983443 | orchestrator | 2026-02-13 06:25:20.983449 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-13 06:25:20.983455 | orchestrator | Friday 13 February 2026 06:25:10 +0000 (0:00:03.967) 0:09:11.065 ******* 2026-02-13 06:25:20.983461 | orchestrator | changed: [testbed-node-0] 2026-02-13 06:25:20.983467 | orchestrator | 2026-02-13 06:25:20.983473 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] 
************************** 2026-02-13 06:25:20.983478 | orchestrator | Friday 13 February 2026 06:25:12 +0000 (0:00:02.064) 0:09:13.130 ******* 2026-02-13 06:25:20.983484 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983490 | orchestrator | 2026-02-13 06:25:20.983496 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-13 06:25:20.983502 | orchestrator | Friday 13 February 2026 06:25:13 +0000 (0:00:01.142) 0:09:14.272 ******* 2026-02-13 06:25:20.983507 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983515 | orchestrator | 2026-02-13 06:25:20.983525 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-13 06:25:20.983535 | orchestrator | Friday 13 February 2026 06:25:14 +0000 (0:00:01.226) 0:09:15.499 ******* 2026-02-13 06:25:20.983544 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983554 | orchestrator | 2026-02-13 06:25:20.983562 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-13 06:25:20.983576 | orchestrator | Friday 13 February 2026 06:25:16 +0000 (0:00:02.044) 0:09:17.543 ******* 2026-02-13 06:25:20.983586 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:25:20.983596 | orchestrator | 2026-02-13 06:25:20.983606 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-13 06:25:20.983617 | orchestrator | Friday 13 February 2026 06:25:18 +0000 (0:00:01.529) 0:09:19.073 ******* 2026-02-13 06:25:20.983627 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:25:20.983637 | orchestrator | 2026-02-13 06:25:20.983645 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-13 06:25:20.983651 | orchestrator | Friday 13 February 2026 06:25:19 +0000 (0:00:01.100) 0:09:20.174 ******* 2026-02-13 06:25:20.983692 | orchestrator | included: 
/ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-13 06:25:20.983700 | orchestrator | 2026-02-13 06:25:20.983706 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-13 06:25:20.983718 | orchestrator | Friday 13 February 2026 06:25:20 +0000 (0:00:01.493) 0:09:21.667 ******* 2026-02-13 06:36:59.182704 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:36:59.182795 | orchestrator | 2026-02-13 06:36:59.182805 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-13 06:36:59.182813 | orchestrator | Friday 13 February 2026 06:25:22 +0000 (0:00:01.127) 0:09:22.795 ******* 2026-02-13 06:36:59.182819 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:36:59.182825 | orchestrator | 2026-02-13 06:36:59.182831 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-13 06:36:59.182837 | orchestrator | Friday 13 February 2026 06:25:23 +0000 (0:00:01.155) 0:09:23.950 ******* 2026-02-13 06:36:59.182844 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-13 06:36:59.182849 | orchestrator | 2026-02-13 06:36:59.182855 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-13 06:36:59.182861 | orchestrator | Friday 13 February 2026 06:25:24 +0000 (0:00:01.529) 0:09:25.480 ******* 2026-02-13 06:36:59.182868 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:36:59.182874 | orchestrator | 2026-02-13 06:36:59.182880 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-13 06:36:59.182886 | orchestrator | Friday 13 February 2026 06:25:27 +0000 (0:00:02.411) 0:09:27.891 ******* 2026-02-13 06:36:59.182892 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:36:59.182898 | orchestrator | 2026-02-13 06:36:59.182904 | orchestrator | TASK [ceph-mon : Enable 
ceph-mon.target] *************************************** 2026-02-13 06:36:59.182909 | orchestrator | Friday 13 February 2026 06:25:29 +0000 (0:00:02.003) 0:09:29.895 ******* 2026-02-13 06:36:59.182915 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:36:59.182921 | orchestrator | 2026-02-13 06:36:59.182927 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-13 06:36:59.182932 | orchestrator | Friday 13 February 2026 06:25:31 +0000 (0:00:02.355) 0:09:32.251 ******* 2026-02-13 06:36:59.182938 | orchestrator | changed: [testbed-node-0] 2026-02-13 06:36:59.182944 | orchestrator | 2026-02-13 06:36:59.182950 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-13 06:36:59.182956 | orchestrator | Friday 13 February 2026 06:25:34 +0000 (0:00:03.225) 0:09:35.477 ******* 2026-02-13 06:36:59.182961 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-13 06:36:59.182968 | orchestrator | 2026-02-13 06:36:59.182974 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-13 06:36:59.182979 | orchestrator | Friday 13 February 2026 06:25:36 +0000 (0:00:01.647) 0:09:37.125 ******* 2026-02-13 06:36:59.182985 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-13 06:36:59.182991 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:36:59.182997 | orchestrator | 2026-02-13 06:36:59.183003 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-13 06:36:59.183008 | orchestrator | Friday 13 February 2026 06:25:59 +0000 (0:00:23.035) 0:10:00.160 ******* 2026-02-13 06:36:59.183032 | orchestrator | ok: [testbed-node-0] 2026-02-13 06:36:59.183038 | orchestrator | 2026-02-13 06:36:59.183044 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-13 06:36:59.183050 | orchestrator | Friday 13 February 2026 06:26:02 +0000 (0:00:03.100) 0:10:03.261 ******* 2026-02-13 06:36:59.183056 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:36:59.183061 | orchestrator | 2026-02-13 06:36:59.183067 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-13 06:36:59.183074 | orchestrator | Friday 13 February 2026 06:26:03 +0000 (0:00:01.115) 0:10:04.376 ******* 2026-02-13 06:36:59.183081 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-13 06:36:59.183089 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-13 06:36:59.183095 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-13 06:36:59.183100 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-13 06:36:59.183131 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-13 06:36:59.183139 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__34d195e7f4d88aa7aaa73f68d6625bc200c0d2cf'}])  2026-02-13 06:36:59.183147 | orchestrator | 2026-02-13 06:36:59.183153 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-13 06:36:59.183158 | orchestrator | Friday 13 February 2026 06:26:13 +0000 (0:00:09.961) 0:10:14.338 ******* 2026-02-13 06:36:59.183164 | orchestrator | changed: [testbed-node-0] 2026-02-13 06:36:59.183170 | orchestrator | 
2026-02-13 06:36:59.183176 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-13 06:36:59.183181 | orchestrator | Friday 13 February 2026 06:26:16 +0000 (0:00:02.496) 0:10:16.835 ******* 2026-02-13 06:36:59.183187 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-13 06:36:59.183193 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-13 06:36:59.183199 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-13 06:36:59.183205 | orchestrator | 2026-02-13 06:36:59.183212 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-13 06:36:59.183224 | orchestrator | Friday 13 February 2026 06:26:18 +0000 (0:00:01.964) 0:10:18.800 ******* 2026-02-13 06:36:59.183231 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-13 06:36:59.183238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-13 06:36:59.183244 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-13 06:36:59.183251 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:36:59.183257 | orchestrator | 2026-02-13 06:36:59.183264 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-13 06:36:59.183271 | orchestrator | Friday 13 February 2026 06:26:19 +0000 (0:00:01.329) 0:10:20.129 ******* 2026-02-13 06:36:59.183277 | orchestrator | skipping: [testbed-node-0] 2026-02-13 06:36:59.183284 | orchestrator | 2026-02-13 06:36:59.183291 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] *** 2026-02-13 06:36:59.183306 | orchestrator | Friday 13 February 2026 06:26:20 +0000 (0:00:01.164) 0:10:21.294 ******* 2026-02-13 06:36:59.183313 | orchestrator | 2026-02-13 06:36:59.183320 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' 
is running] *** 2026-02-13 06:36:59.183327 | orchestrator | 2026-02-13 06:36:59.183334 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183341 | orchestrator | 2026-02-13 06:36:59.183347 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183354 | orchestrator | 2026-02-13 06:36:59.183361 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183367 | orchestrator | 2026-02-13 06:36:59.183374 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183381 | orchestrator | 2026-02-13 06:36:59.183387 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183394 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (5 retries left). 2026-02-13 06:36:59.183401 | orchestrator | 2026-02-13 06:36:59.183407 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183414 | orchestrator | 2026-02-13 06:36:59.183421 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183427 | orchestrator | 2026-02-13 06:36:59.183434 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183441 | orchestrator | 2026-02-13 06:36:59.183448 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' 
is running] *** 2026-02-13 06:36:59.183454 | orchestrator | 2026-02-13 06:36:59.183461 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183467 | orchestrator | 2026-02-13 06:36:59.183474 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183481 | orchestrator | 2026-02-13 06:36:59.183487 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183494 | orchestrator | 2026-02-13 06:36:59.183501 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183507 | orchestrator | 2026-02-13 06:36:59.183514 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183520 | orchestrator | 2026-02-13 06:36:59.183526 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183533 | orchestrator | 2026-02-13 06:36:59.183539 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:36:59.183554 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (4 retries left). 2026-02-13 06:36:59.183591 | orchestrator | 2026-02-13 06:36:59.183602 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205470 | orchestrator | 2026-02-13 06:57:47.205580 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' 
is running] *** 2026-02-13 06:57:47.205594 | orchestrator | 2026-02-13 06:57:47.205601 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205607 | orchestrator | 2026-02-13 06:57:47.205614 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205622 | orchestrator | 2026-02-13 06:57:47.205629 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205636 | orchestrator | 2026-02-13 06:57:47.205643 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205650 | orchestrator | 2026-02-13 06:57:47.205657 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205664 | orchestrator | 2026-02-13 06:57:47.205671 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205678 | orchestrator | 2026-02-13 06:57:47.205686 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205694 | orchestrator | 2026-02-13 06:57:47.205701 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205708 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (3 retries left). 2026-02-13 06:57:47.205717 | orchestrator | 2026-02-13 06:57:47.205725 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' 
is running] *** 2026-02-13 06:57:47.205732 | orchestrator | 2026-02-13 06:57:47.205737 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205741 | orchestrator | 2026-02-13 06:57:47.205746 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205751 | orchestrator | 2026-02-13 06:57:47.205755 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205760 | orchestrator | 2026-02-13 06:57:47.205764 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205769 | orchestrator | 2026-02-13 06:57:47.205773 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205778 | orchestrator | 2026-02-13 06:57:47.205782 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205787 | orchestrator | 2026-02-13 06:57:47.205791 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205795 | orchestrator | 2026-02-13 06:57:47.205800 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205804 | orchestrator | 2026-02-13 06:57:47.205808 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205813 | orchestrator | 2026-02-13 06:57:47.205817 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' 
is running] *** 2026-02-13 06:57:47.205822 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (2 retries left). 2026-02-13 06:57:47.205826 | orchestrator | 2026-02-13 06:57:47.205830 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205853 | orchestrator | 2026-02-13 06:57:47.205857 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205862 | orchestrator | 2026-02-13 06:57:47.205866 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205870 | orchestrator | 2026-02-13 06:57:47.205875 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205879 | orchestrator | 2026-02-13 06:57:47.205883 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205888 | orchestrator | 2026-02-13 06:57:47.205892 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205897 | orchestrator | 2026-02-13 06:57:47.205901 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205905 | orchestrator | 2026-02-13 06:57:47.205910 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' is running] *** 2026-02-13 06:57:47.205914 | orchestrator | 2026-02-13 06:57:47.205918 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...' 
is running] ***
2026-02-13 06:57:47.205932 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Container | waiting for the containerized monitor to join the quorum... (1 retries left).
2026-02-13 06:57:47.206076 | orchestrator | STILL ALIVE [task 'Container | waiting for the containerized monitor to join the quorum...'
is running] ***
2026-02-13 06:57:47.206113 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"attempts": 5, "changed": false, "cmd": ["docker", "exec", "ceph-mon-testbed-node-0", "ceph", "--cluster", "ceph", "-m", "192.168.16.8", "quorum_status", "--format", "json"], "delta": "0:05:00.273703", "end": "2026-02-13 06:57:39.423641", "msg": "non-zero return code", "rc": 1, "start": "2026-02-13 06:52:39.149938", "stderr": "2026-02-13T06:57:39.403+0000 72ee2072f640 0 monclient(hunting): authenticate timed out after 300\n[errno 110] RADOS timed out (error connecting to the cluster)", "stderr_lines": ["2026-02-13T06:57:39.403+0000 72ee2072f640 0 monclient(hunting): authenticate timed out after 300", "[errno 110] RADOS timed out (error connecting to the cluster)"], "stdout": "", "stdout_lines": []}
2026-02-13 06:57:47.206131 | orchestrator | TASK [Unmask the mon service] **************************************************
2026-02-13 06:57:47.206136 | orchestrator | Friday 13 February 2026 06:57:40 +0000 (0:31:20.373) 0:41:41.667 *******
2026-02-13 06:57:47.206141 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:57:47.206153 | orchestrator | TASK [Unmask the mgr service] **************************************************
2026-02-13 06:57:47.206158 | orchestrator | Friday 13 February 2026 06:57:42 +0000
(0:00:01.841) 0:41:43.509 *******
2026-02-13 06:57:47.206163 | orchestrator | ok: [testbed-node-0]
2026-02-13 06:57:47.206172 | orchestrator | TASK [Stop the playbook execution] *********************************************
2026-02-13 06:57:47.206178 | orchestrator | Friday 13 February 2026 06:57:44 +0000 (0:00:01.813) 0:41:45.323 *******
2026-02-13 06:57:47.206183 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "msg": "There was an error during monitor upgrade. Please, check the previous task results."}
2026-02-13 06:57:47.206195 | orchestrator | PLAY RECAP *********************************************************************
2026-02-13 06:57:47.206200 | orchestrator | localhost       : ok=0   changed=0 unreachable=0 failed=0 skipped=1   rescued=0 ignored=0
2026-02-13 06:57:47.206205 | orchestrator | testbed-manager : ok=25  changed=1 unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-02-13 06:57:47.206210 | orchestrator | testbed-node-0  : ok=121 changed=7 unreachable=0 failed=1 skipped=164 rescued=1 ignored=0
2026-02-13 06:57:47.206217 | orchestrator | testbed-node-1  : ok=25  changed=1 unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-02-13 06:57:47.206222 | orchestrator | testbed-node-2  : ok=25  changed=1 unreachable=0 failed=0 skipped=57  rescued=0 ignored=0
2026-02-13 06:57:47.206227 | orchestrator | testbed-node-3  : ok=33  changed=1 unreachable=0 failed=0 skipped=74  rescued=0 ignored=0
2026-02-13 06:57:47.206233 | orchestrator | testbed-node-4  : ok=33  changed=1 unreachable=0 failed=0 skipped=71  rescued=0 ignored=0
2026-02-13 06:57:47.206241 | orchestrator | testbed-node-5  : ok=33  changed=1 unreachable=0 failed=0 skipped=71  rescued=0 ignored=0
2026-02-13 06:57:47.206269 | orchestrator | TASKS
RECAP ********************************************************************
2026-02-13 06:57:47.206277 | orchestrator | Friday 13 February 2026 06:57:47 +0000 (0:00:02.553) 0:41:47.876 *******
2026-02-13 06:57:47.206284 | orchestrator | ===============================================================================
2026-02-13 06:57:47.206291 | orchestrator | Container | waiting for the containerized monitor to join the quorum... 1880.37s
2026-02-13 06:57:47.206297 | orchestrator | Gather and delegate facts ---------------------------------------------- 33.93s
2026-02-13 06:57:47.206311 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.04s
2026-02-13 06:57:47.794699 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 13.47s
2026-02-13 06:57:47.794803 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 10.81s
2026-02-13 06:57:47.794817 | orchestrator | Set cluster configs ---------------------------------------------------- 10.43s
2026-02-13 06:57:47.794859 | orchestrator | ceph-mon : Set cluster configs ------------------------------------------ 9.96s
2026-02-13 06:57:47.794871 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.95s
2026-02-13 06:57:47.794882 | orchestrator | Gather facts ------------------------------------------------------------ 6.04s
2026-02-13 06:57:47.794893 | orchestrator | Gather facts on all Ceph hosts for following reference ------------------ 5.56s
2026-02-13 06:57:47.794904 | orchestrator | Stop ceph mon ----------------------------------------------------------- 4.05s
2026-02-13 06:57:47.794923 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.97s
2026-02-13 06:57:47.794941 | orchestrator | ceph-mon : Check if monitor initial keyring already exists -------------- 3.58s
2026-02-13 06:57:47.794960 | orchestrator | ceph-facts : Get current
fsid ------------------------------------------- 3.46s
2026-02-13 06:57:47.794979 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.23s
2026-02-13 06:57:47.794997 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.23s
2026-02-13 06:57:47.795016 | orchestrator | ceph-mon : Start the monitor service ------------------------------------ 3.23s
2026-02-13 06:57:47.795035 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 3.10s
2026-02-13 06:57:47.795054 | orchestrator | ceph-infra : Add logrotate configuration -------------------------------- 3.05s
2026-02-13 06:57:47.795073 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.99s
2026-02-13 06:57:48.330736 | orchestrator | ERROR
2026-02-13 06:57:48.331025 | orchestrator | {
2026-02-13 06:57:48.331071 | orchestrator |   "delta": "2:04:42.735118",
2026-02-13 06:57:48.331100 | orchestrator |   "end": "2026-02-13 06:57:48.092976",
2026-02-13 06:57:48.331126 | orchestrator |   "msg": "non-zero return code",
2026-02-13 06:57:48.331150 | orchestrator |   "rc": 2,
2026-02-13 06:57:48.331173 | orchestrator |   "start": "2026-02-13 04:53:05.357858"
2026-02-13 06:57:48.331195 | orchestrator | } failure
2026-02-13 06:57:48.534758 | PLAY RECAP
2026-02-13 06:57:48.534816 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0
2026-02-13 06:57:48.760026 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main]
2026-02-13 06:57:48.761191 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-13 06:57:49.547428 | PLAY [Post output play]
2026-02-13 06:57:49.564977 | LOOP [stage-output : Register sources]
2026-02-13 06:57:49.636619 |
TASK [stage-output : Check sudo]
2026-02-13 06:57:50.488893 | orchestrator | sudo: a password is required
2026-02-13 06:57:50.676666 | orchestrator | ok: Runtime: 0:00:00.012872
2026-02-13 06:57:50.684523 | LOOP [stage-output : Set source and destination for files and folders]
2026-02-13 06:57:50.726201 | TASK [stage-output : Build a list of source, dest dictionaries]
2026-02-13 06:57:50.805595 | orchestrator | ok
2026-02-13 06:57:50.814814 | LOOP [stage-output : Ensure target folders exist]
2026-02-13 06:57:51.291125 | orchestrator | ok: "docs"
2026-02-13 06:57:51.553016 | orchestrator | ok: "artifacts"
2026-02-13 06:57:51.817245 | orchestrator | ok: "logs"
2026-02-13 06:57:51.838145 | LOOP [stage-output : Copy files and folders to staging folder]
2026-02-13 06:57:51.877215 | TASK [stage-output : Make all log files readable]
2026-02-13 06:57:52.186496 | orchestrator | ok
2026-02-13 06:57:52.193072 | TASK [stage-output : Rename log files that match extensions_to_txt]
2026-02-13 06:57:52.227045 | orchestrator | skipping: Conditional result was False
2026-02-13 06:57:52.238903 | TASK [stage-output : Discover log files for compression]
2026-02-13 06:57:52.262550 | orchestrator | skipping: Conditional result was False
2026-02-13 06:57:52.273208 | LOOP [stage-output : Archive everything from logs]
2026-02-13 06:57:52.317914 | PLAY [Post cleanup play]
2026-02-13 06:57:52.326530 | TASK [Set cloud fact (Zuul deployment)]
2026-02-13 06:57:52.382617 | orchestrator | ok
2026-02-13 06:57:52.393169 | TASK [Set cloud fact (local deployment)]
2026-02-13 06:57:52.417200 | orchestrator | skipping: Conditional result was False
2026-02-13
06:57:52.430775 | TASK [Clean the cloud environment]
2026-02-13 06:57:53.080710 | orchestrator | 2026-02-13 06:57:53 - clean up servers
2026-02-13 06:57:53.889311 | orchestrator | 2026-02-13 06:57:53 - testbed-manager
2026-02-13 06:57:53.976962 | orchestrator | 2026-02-13 06:57:53 - testbed-node-5
2026-02-13 06:57:54.065515 | orchestrator | 2026-02-13 06:57:54 - testbed-node-3
2026-02-13 06:57:54.161819 | orchestrator | 2026-02-13 06:57:54 - testbed-node-1
2026-02-13 06:57:54.243999 | orchestrator | 2026-02-13 06:57:54 - testbed-node-0
2026-02-13 06:57:54.344609 | orchestrator | 2026-02-13 06:57:54 - testbed-node-4
2026-02-13 06:57:54.438482 | orchestrator | 2026-02-13 06:57:54 - testbed-node-2
2026-02-13 06:57:54.531354 | orchestrator | 2026-02-13 06:57:54 - clean up keypairs
2026-02-13 06:57:54.554946 | orchestrator | 2026-02-13 06:57:54 - testbed
2026-02-13 06:57:54.592283 | orchestrator | 2026-02-13 06:57:54 - wait for servers to be gone
2026-02-13 06:58:03.319587 | orchestrator | 2026-02-13 06:58:03 - clean up ports
2026-02-13 06:58:03.524157 | orchestrator | 2026-02-13 06:58:03 - 11e74f0f-eb20-4fb9-983c-e58b4ff913e2
2026-02-13 06:58:04.001287 | orchestrator | 2026-02-13 06:58:04 - 50d7a29e-fb07-42b7-a7b4-e9572940fad6
2026-02-13 06:58:04.275803 | orchestrator | 2026-02-13 06:58:04 - 59761959-695c-408a-8962-1fcb720815ee
2026-02-13 06:58:04.531163 | orchestrator | 2026-02-13 06:58:04 - a0efab77-473d-460e-8b46-34940026d99d
2026-02-13 06:58:04.735807 | orchestrator | 2026-02-13 06:58:04 - b45d947c-ad1b-41cf-acec-c3f10d073339
2026-02-13 06:58:05.014070 | orchestrator | 2026-02-13 06:58:05 - cb59f827-21d3-4f0c-a04a-aecc886bc07f
2026-02-13 06:58:05.255798 | orchestrator | 2026-02-13 06:58:05 - e62c53e5-cfaa-42bf-ab12-191a8b395bca
2026-02-13 06:58:05.486269 | orchestrator | 2026-02-13 06:58:05 - clean up volumes
2026-02-13 06:58:05.618613 | orchestrator | 2026-02-13 06:58:05 - testbed-volume-5-node-base
2026-02-13 06:58:05.661121 |
orchestrator | 2026-02-13 06:58:05 - testbed-volume-4-node-base
2026-02-13 06:58:05.705445 | orchestrator | 2026-02-13 06:58:05 - testbed-volume-3-node-base
2026-02-13 06:58:05.745416 | orchestrator | 2026-02-13 06:58:05 - testbed-volume-0-node-base
2026-02-13 06:58:05.785724 | orchestrator | 2026-02-13 06:58:05 - testbed-volume-2-node-base
2026-02-13 06:58:05.827564 | orchestrator | 2026-02-13 06:58:05 - testbed-volume-1-node-base
2026-02-13 06:58:05.867383 | orchestrator | 2026-02-13 06:58:05 - testbed-volume-manager-base
2026-02-13 06:58:05.908952 | orchestrator | 2026-02-13 06:58:05 - testbed-volume-8-node-5
2026-02-13 06:58:05.955717 | orchestrator | 2026-02-13 06:58:05 - testbed-volume-2-node-5
2026-02-13 06:58:05.995168 | orchestrator | 2026-02-13 06:58:05 - testbed-volume-7-node-4
2026-02-13 06:58:06.036045 | orchestrator | 2026-02-13 06:58:06 - testbed-volume-5-node-5
2026-02-13 06:58:06.081361 | orchestrator | 2026-02-13 06:58:06 - testbed-volume-6-node-3
2026-02-13 06:58:06.122264 | orchestrator | 2026-02-13 06:58:06 - testbed-volume-3-node-3
2026-02-13 06:58:06.170614 | orchestrator | 2026-02-13 06:58:06 - testbed-volume-4-node-4
2026-02-13 06:58:06.216118 | orchestrator | 2026-02-13 06:58:06 - testbed-volume-0-node-3
2026-02-13 06:58:06.256959 | orchestrator | 2026-02-13 06:58:06 - testbed-volume-1-node-4
2026-02-13 06:58:06.299254 | orchestrator | 2026-02-13 06:58:06 - disconnect routers
2026-02-13 06:58:06.423617 | orchestrator | 2026-02-13 06:58:06 - testbed
2026-02-13 06:58:07.489939 | orchestrator | 2026-02-13 06:58:07 - clean up subnets
2026-02-13 06:58:07.552585 | orchestrator | 2026-02-13 06:58:07 - subnet-testbed-management
2026-02-13 06:58:07.708820 | orchestrator | 2026-02-13 06:58:07 - clean up networks
2026-02-13 06:58:07.832592 | orchestrator | 2026-02-13 06:58:07 - net-testbed-management
2026-02-13 06:58:08.116463 | orchestrator | 2026-02-13 06:58:08 - clean up security groups
2026-02-13 06:58:08.155067 | orchestrator | 2026-02-13
06:58:08 - testbed-management
2026-02-13 06:58:08.259589 | orchestrator | 2026-02-13 06:58:08 - testbed-node
2026-02-13 06:58:08.369383 | orchestrator | 2026-02-13 06:58:08 - clean up floating ips
2026-02-13 06:58:08.406419 | orchestrator | 2026-02-13 06:58:08 - 81.163.192.228
2026-02-13 06:58:08.749687 | orchestrator | 2026-02-13 06:58:08 - clean up routers
2026-02-13 06:58:08.811551 | orchestrator | 2026-02-13 06:58:08 - testbed
2026-02-13 06:58:09.985285 | orchestrator | ok: Runtime: 0:00:17.032972
2026-02-13 06:58:09.989970 | PLAY RECAP
2026-02-13 06:58:09.990145 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2026-02-13 06:58:10.148636 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2026-02-13 06:58:10.149791 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-13 06:58:10.892629 | PLAY [Cleanup play]
2026-02-13 06:58:10.909287 | TASK [Set cloud fact (Zuul deployment)]
2026-02-13 06:58:10.962257 | orchestrator | ok
2026-02-13 06:58:10.970271 | TASK [Set cloud fact (local deployment)]
2026-02-13 06:58:11.004831 | orchestrator | skipping: Conditional result was False
2026-02-13 06:58:11.023456 | TASK [Clean the cloud environment]
2026-02-13 06:58:12.151270 | orchestrator | 2026-02-13 06:58:12 - clean up servers
2026-02-13 06:58:12.614631 | orchestrator | 2026-02-13 06:58:12 - clean up keypairs
2026-02-13 06:58:12.632617 | orchestrator | 2026-02-13 06:58:12 - wait for servers to be gone
2026-02-13 06:58:12.676120 | orchestrator | 2026-02-13 06:58:12 - clean up ports
2026-02-13 06:58:12.751885 | orchestrator | 2026-02-13 06:58:12 - clean up volumes
2026-02-13 06:58:12.819866 | orchestrator | 2026-02-13 06:58:12 - disconnect routers
2026-02-13 06:58:12.843323 |
orchestrator | 2026-02-13 06:58:12 - clean up subnets
2026-02-13 06:58:12.867076 | orchestrator | 2026-02-13 06:58:12 - clean up networks
2026-02-13 06:58:13.021378 | orchestrator | 2026-02-13 06:58:13 - clean up security groups
2026-02-13 06:58:13.058159 | orchestrator | 2026-02-13 06:58:13 - clean up floating ips
2026-02-13 06:58:13.086746 | orchestrator | 2026-02-13 06:58:13 - clean up routers
2026-02-13 06:58:13.563589 | orchestrator | ok: Runtime: 0:00:01.314845
2026-02-13 06:58:13.568664 | PLAY RECAP
2026-02-13 06:58:13.568801 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2026-02-13 06:58:13.697489 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2026-02-13 06:58:13.700902 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-13 06:58:14.453606 | PLAY [Base post-fetch]
2026-02-13 06:58:14.469308 | TASK [fetch-output : Set log path for multiple nodes]
2026-02-13 06:58:14.534882 | orchestrator | skipping: Conditional result was False
2026-02-13 06:58:14.548648 | TASK [fetch-output : Set log path for single node]
2026-02-13 06:58:14.606747 | orchestrator | ok
2026-02-13 06:58:14.616793 | LOOP [fetch-output : Ensure local output dirs]
2026-02-13 06:58:15.100375 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/eaf6616a4e9e46b08359ec9d54172af9/work/logs"
2026-02-13 06:58:15.380944 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/eaf6616a4e9e46b08359ec9d54172af9/work/artifacts"
2026-02-13 06:58:15.656867 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/eaf6616a4e9e46b08359ec9d54172af9/work/docs"
2026-02-13 06:58:15.685215 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-02-13
06:58:16.664822 | orchestrator | changed: .d..t...... ./
2026-02-13 06:58:16.665196 | orchestrator | changed: All items complete
2026-02-13 06:58:17.419467 | orchestrator | changed: .d..t...... ./
2026-02-13 06:58:18.145388 | orchestrator | changed: .d..t...... ./
2026-02-13 06:58:18.174148 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-02-13 06:58:18.214795 | orchestrator | skipping: Conditional result was False
2026-02-13 06:58:18.217179 | orchestrator | skipping: Conditional result was False
2026-02-13 06:58:18.234796 | PLAY RECAP
2026-02-13 06:58:18.234938 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-02-13 06:58:18.362157 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-02-13 06:58:18.363790 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-13 06:58:19.131890 | PLAY [Base post]
2026-02-13 06:58:19.146873 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-02-13 06:58:20.138756 | orchestrator | changed
2026-02-13 06:58:20.149172 | PLAY RECAP
2026-02-13 06:58:20.149247 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-02-13 06:58:20.272357 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-02-13 06:58:20.273487 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-02-13 06:58:21.061283 | PLAY [Base post-logs]
2026-02-13 06:58:21.072573 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-02-13 06:58:21.556542 |
localhost | changed
2026-02-13 06:58:21.567158 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-02-13 06:58:21.602753 | localhost | ok
2026-02-13 06:58:21.606181 | TASK [Set zuul-log-path fact]
2026-02-13 06:58:21.633705 | localhost | ok
2026-02-13 06:58:21.650743 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-13 06:58:21.687715 | localhost | ok
2026-02-13 06:58:21.692937 | TASK [upload-logs : Create log directories]
2026-02-13 06:58:22.189813 | localhost | changed
2026-02-13 06:58:22.192767 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-02-13 06:58:22.675150 | localhost -> localhost | ok: Runtime: 0:00:00.007063
2026-02-13 06:58:22.685075 | TASK [upload-logs : Upload logs to log server]
2026-02-13 06:58:23.280939 | localhost | Output suppressed because no_log was given
2026-02-13 06:58:23.284888 | LOOP [upload-logs : Compress console log and json output]
2026-02-13 06:58:23.333435 | localhost | skipping: Conditional result was False
2026-02-13 06:58:23.338815 | localhost | skipping: Conditional result was False
2026-02-13 06:58:23.346422 | LOOP [upload-logs : Upload compressed console log and json output]
2026-02-13 06:58:23.392902 | localhost | skipping: Conditional result was False
2026-02-13 06:58:23.397054 | localhost | skipping: Conditional result was False
2026-02-13 06:58:23.410775 | LOOP [upload-logs : Upload console log and json output]